Feb 26 08:12:43 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 26 08:12:43 crc restorecon[4688]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 08:12:43 crc restorecon[4688]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 26 08:12:43 crc restorecon[4688]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc 
restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:43 crc restorecon[4688]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:43 crc restorecon[4688]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:43 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:44 crc 
restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 
08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc 
restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 08:12:44 crc 
restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 
crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 
08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 08:12:44 crc 
restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc 
restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 08:12:44 crc restorecon[4688]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 
crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc 
restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc 
restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc 
restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc 
restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc 
restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 08:12:44 crc restorecon[4688]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 08:12:44 crc restorecon[4688]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 26 08:12:45 crc kubenswrapper[4741]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 08:12:45 crc kubenswrapper[4741]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 26 08:12:45 crc kubenswrapper[4741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 08:12:45 crc kubenswrapper[4741]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 26 08:12:45 crc kubenswrapper[4741]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 26 08:12:45 crc kubenswrapper[4741]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.473622 4741 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482655 4741 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482695 4741 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482706 4741 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482718 4741 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482733 4741 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482745 4741 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482754 4741 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482763 4741 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482771 4741 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482780 4741 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482788 4741 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482795 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482803 4741 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482811 4741 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482819 4741 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482827 4741 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482835 4741 feature_gate.go:330] unrecognized feature gate: Example Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482843 4741 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 
08:12:45.482851 4741 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482859 4741 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482867 4741 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482875 4741 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482883 4741 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482891 4741 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482899 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482906 4741 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482914 4741 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482922 4741 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482932 4741 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482943 4741 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482951 4741 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482970 4741 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482978 4741 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482986 4741 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.482996 4741 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483004 4741 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483011 4741 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483019 4741 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483027 4741 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483035 4741 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483043 4741 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483050 4741 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483058 4741 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483066 4741 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483074 4741 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483082 4741 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483090 4741 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483098 4741 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483105 4741 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483163 4741 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483176 4741 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483185 4741 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483194 4741 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483202 4741 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483213 4741 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483224 4741 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483231 4741 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483239 4741 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483250 4741 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483258 4741 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483268 4741 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483278 4741 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483287 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483295 4741 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483303 4741 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483311 4741 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483319 4741 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483329 4741 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483337 4741 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483345 4741 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.483353 4741 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483560 4741 flags.go:64] FLAG: --address="0.0.0.0"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483581 4741 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483601 4741 flags.go:64] FLAG: --anonymous-auth="true"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483614 4741 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483628 4741 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483637 4741 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483650 4741 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483662 4741 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483671 4741 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483682 4741 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483693 4741 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483703 4741 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483712 4741 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483721 4741 flags.go:64] FLAG: --cgroup-root=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483731 4741 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483741 4741 flags.go:64] FLAG: --client-ca-file=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483751 4741 flags.go:64] FLAG: --cloud-config=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483760 4741 flags.go:64] FLAG: --cloud-provider=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483768 4741 flags.go:64] FLAG: --cluster-dns="[]"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483781 4741 flags.go:64] FLAG: --cluster-domain=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483790 4741 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483799 4741 flags.go:64] FLAG: --config-dir=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483808 4741 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483818 4741 flags.go:64] FLAG: --container-log-max-files="5"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483831 4741 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483840 4741 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483849 4741 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483859 4741 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483868 4741 flags.go:64] FLAG: --contention-profiling="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483877 4741 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483887 4741 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483896 4741 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483905 4741 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483917 4741 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483926 4741 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483936 4741 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483945 4741 flags.go:64] FLAG: --enable-load-reader="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483955 4741 flags.go:64] FLAG: --enable-server="true"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483964 4741 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483976 4741 flags.go:64] FLAG: --event-burst="100"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483986 4741 flags.go:64] FLAG: --event-qps="50"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.483995 4741 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484004 4741 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484013 4741 flags.go:64] FLAG: --eviction-hard=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484024 4741 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484033 4741 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484042 4741 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484053 4741 flags.go:64] FLAG: --eviction-soft=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484063 4741 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484073 4741 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484083 4741 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484092 4741 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484101 4741 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484142 4741 flags.go:64] FLAG: --fail-swap-on="true"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484155 4741 flags.go:64] FLAG: --feature-gates=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484172 4741 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484184 4741 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484196 4741 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484208 4741 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484219 4741 flags.go:64] FLAG: --healthz-port="10248"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484231 4741 flags.go:64] FLAG: --help="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484243 4741 flags.go:64] FLAG: --hostname-override=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484252 4741 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484262 4741 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484271 4741 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484280 4741 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484290 4741 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484299 4741 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484308 4741 flags.go:64] FLAG: --image-service-endpoint=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484316 4741 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484325 4741 flags.go:64] FLAG: --kube-api-burst="100"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484335 4741 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484345 4741 flags.go:64] FLAG: --kube-api-qps="50"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484354 4741 flags.go:64] FLAG: --kube-reserved=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484364 4741 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484373 4741 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484382 4741 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484392 4741 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484401 4741 flags.go:64] FLAG: --lock-file=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484410 4741 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484419 4741 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484432 4741 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484467 4741 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484478 4741 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484489 4741 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484500 4741 flags.go:64] FLAG: --logging-format="text"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484509 4741 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484520 4741 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484529 4741 flags.go:64] FLAG: --manifest-url=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484538 4741 flags.go:64] FLAG: --manifest-url-header=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484553 4741 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484565 4741 flags.go:64] FLAG: --max-open-files="1000000"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484578 4741 flags.go:64] FLAG: --max-pods="110"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484587 4741 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484597 4741 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484606 4741 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484615 4741 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484626 4741 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484635 4741 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484645 4741 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484666 4741 flags.go:64] FLAG: --node-status-max-images="50"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484675 4741 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484685 4741 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484694 4741 flags.go:64] FLAG: --pod-cidr=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484703 4741 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484719 4741 flags.go:64] FLAG: --pod-manifest-path=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484728 4741 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484738 4741 flags.go:64] FLAG: --pods-per-core="0"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484747 4741 flags.go:64] FLAG: --port="10250"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484757 4741 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484766 4741 flags.go:64] FLAG: --provider-id=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484775 4741 flags.go:64] FLAG: --qos-reserved=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484784 4741 flags.go:64] FLAG: --read-only-port="10255"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484794 4741 flags.go:64] FLAG: --register-node="true"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484802 4741 flags.go:64] FLAG: --register-schedulable="true"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484811 4741 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484827 4741 flags.go:64] FLAG: --registry-burst="10"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484836 4741 flags.go:64] FLAG: --registry-qps="5"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484845 4741 flags.go:64] FLAG: --reserved-cpus=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484854 4741 flags.go:64] FLAG: --reserved-memory=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484866 4741 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484875 4741 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484884 4741 flags.go:64] FLAG: --rotate-certificates="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484894 4741 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484902 4741 flags.go:64] FLAG: --runonce="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484911 4741 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484921 4741 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484931 4741 flags.go:64] FLAG: --seccomp-default="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484940 4741 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484949 4741 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484960 4741 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484970 4741 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484979 4741 flags.go:64] FLAG: --storage-driver-password="root"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484989 4741 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.484998 4741 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485007 4741 flags.go:64] FLAG: --storage-driver-user="root"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485016 4741 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485026 4741 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485035 4741 flags.go:64] FLAG: --system-cgroups=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485044 4741 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485057 4741 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485066 4741 flags.go:64] FLAG: --tls-cert-file=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485076 4741 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485088 4741 flags.go:64] FLAG: --tls-min-version=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485097 4741 flags.go:64] FLAG: --tls-private-key-file=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485106 4741 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485161 4741 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485173 4741 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485182 4741 flags.go:64] FLAG: --v="2"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485197 4741 flags.go:64] FLAG: --version="false"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485210 4741 flags.go:64] FLAG: --vmodule=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485222 4741 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.485232 4741 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485474 4741 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485484 4741 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485494 4741 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485503 4741 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485514 4741 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485525 4741 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485535 4741 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485544 4741 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485555 4741 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485566 4741 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485575 4741 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485585 4741 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485594 4741 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485602 4741 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485610 4741 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485619 4741 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485627 4741 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485635 4741 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485643 4741 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485651 4741 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485659 4741 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485668 4741 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485677 4741 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485685 4741 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485693 4741 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485700 4741 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485708 4741 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485716 4741 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485724 4741 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485735 4741 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485744 4741 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485751 4741 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485760 4741 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485768 4741 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485776 4741 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485784 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485792 4741 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485799 4741 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485808 4741 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485816 4741 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485825 4741 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485833 4741 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485841 4741 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485849 4741 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485860 4741 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485869 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485878 4741 feature_gate.go:330] unrecognized feature gate: Example
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485886 4741 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485895 4741 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485903 4741 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485912 4741 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485920 4741 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485929 4741 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485937 4741 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485946 4741 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485954 4741 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485963 4741 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485971 4741 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485979 4741 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485987 4741 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.485996 4741 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.486004 4741 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.486012 4741 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.486019 4741 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.486028 4741 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.486036 4741 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.486045 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.486052 4741 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.486060 4741 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.486068 4741 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.486076 4741 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.486100 4741 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.501774 4741 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.501857 4741 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.501991 4741 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502006 4741 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502016 4741 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502025 4741 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502034 4741 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502042 4741 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502052 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502060 4741 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502068 4741 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502076 4741 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502085 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502094 4741 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502101 4741 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502140 4741 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502153 4741 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502168 4741 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502179 4741 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502187 4741 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502197 4741 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502205 4741 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502214 4741 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502222 4741 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502230 4741 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502238 4741 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502245 4741 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502254 4741 feature_gate.go:330] unrecognized feature gate: 
MixedCPUsAllocation Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502262 4741 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502270 4741 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502277 4741 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502285 4741 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502295 4741 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502305 4741 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502314 4741 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502322 4741 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502332 4741 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502340 4741 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502348 4741 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502356 4741 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502364 4741 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502372 4741 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 
08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502379 4741 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502387 4741 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502395 4741 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502403 4741 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502411 4741 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502419 4741 feature_gate.go:330] unrecognized feature gate: Example Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502430 4741 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502440 4741 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502451 4741 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502459 4741 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502467 4741 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502474 4741 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502482 4741 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502491 4741 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 08:12:45 crc kubenswrapper[4741]: 
W0226 08:12:45.502498 4741 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502506 4741 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502514 4741 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502523 4741 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502531 4741 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502540 4741 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502548 4741 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502559 4741 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
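The flood of `unrecognized feature gate` warnings above comes from OpenShift-specific gate names being handed to the kubelet's upstream feature-gate parser, which only knows upstream Kubernetes gates; the warnings are noisy but each names a distinct gate. A quick sketch for triaging a dump like this is to tally the distinct unrecognized gates (the `sample` string here is a trimmed, illustrative excerpt, not the full log):

```python
import re
from collections import Counter

# Tally distinct "unrecognized feature gate" warnings in a journal dump.
# Lines in captured logs may be wrapped, so match the pattern anywhere.
PATTERN = re.compile(r"unrecognized feature gate: (\w+)")

def tally_unrecognized_gates(log_text: str) -> Counter:
    return Counter(PATTERN.findall(log_text))

sample = (
    "W0226 08:12:45.485869 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS "
    "W0226 08:12:45.502085 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS "
    "W0226 08:12:45.485878 4741 feature_gate.go:330] unrecognized feature gate: Example"
)
counts = tally_unrecognized_gates(sample)
print(counts["MultiArchInstallAWS"])  # 2: the same gate is re-reported on each parse pass
```

A count much higher than the number of distinct gates just reflects the kubelet re-parsing its configuration several times during startup, as the repeated passes above show.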
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502568 4741 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502577 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502585 4741 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502592 4741 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502600 4741 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502607 4741 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502616 4741 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502623 4741 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502634 4741 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.502648 4741 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502886 4741 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502897 4741 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502908 4741 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502919 4741 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502927 4741 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502936 4741 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502945 4741 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502952 4741 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502961 4741 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502970 4741 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502978 4741 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502987 4741 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.502995 4741 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503004 4741 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503012 4741 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503020 4741 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503029 4741 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503041 4741 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503051 4741 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503060 4741 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503069 4741 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503077 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503086 4741 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503096 4741 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
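The `feature_gate.go:386` lines summarize the gates the kubelet actually applied as a Go map literal (`{map[Name:bool ...]}`). When comparing these summaries across restarts it can help to turn that literal into a dict; a minimal sketch, assuming the single-line format shown in this log:

```python
import re

# Parse a kubelet "feature gates: {map[...]}" summary line into a dict.
def parse_feature_gates(line: str) -> dict:
    m = re.search(r"feature gates: \{map\[(.*?)\]\}", line)
    if not m:
        return {}
    pairs = (item.split(":") for item in m.group(1).split())
    return {name: value == "true" for name, value in pairs}

line = ("I0226 08:12:45.502648 4741 feature_gate.go:386] feature gates: "
        "{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}")
gates = parse_feature_gates(line)
print(gates["NodeSwap"])  # False
```

Note the same gate map appears verbatim at 08:12:45.486100, .502648, and .503538, so identical parses across the three passes are expected here.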
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503105 4741 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503137 4741 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503145 4741 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503155 4741 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503163 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503171 4741 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503182 4741 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503191 4741 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503199 4741 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503207 4741 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503218 4741 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503226 4741 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503234 4741 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503242 4741 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503250 4741 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503257 4741 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503265 4741 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503274 4741 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503281 4741 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503290 4741 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503297 4741 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503305 4741 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503313 4741 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503320 4741 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503328 4741 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503336 4741 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503344 4741 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503351 4741 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503359 4741 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503367 4741 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503375 4741 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503382 4741 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503390 4741 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503398 4741 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503406 4741 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503414 4741 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503421 4741 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503429 4741 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503462 4741 feature_gate.go:330] unrecognized feature gate: Example
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503470 4741 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503478 4741 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503485 4741 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503493 4741 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503501 4741 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503509 4741 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503517 4741 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.503525 4741 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.503538 4741 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.504779 4741 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.509750 4741 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.516831 4741 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.517092 4741 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
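The `bootstrap.go:266` error above embeds the client certificate's expiry in Go's default `time.Time` formatting (`2026-02-24 05:52:08 +0000 UTC`), which explains the fallback to bootstrap credentials on the next line. A small sketch for pulling that stamp out of the error text and comparing it against the log's timestamp (the comparison time is taken from these log lines, not computed from the certificate itself):

```python
import re
from datetime import datetime, timezone

# Error text as logged by bootstrap.go, with the Go-formatted expiry stamp.
ERR = ('part of the existing bootstrap client certificate in '
       '/var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC')

def cert_expiry(err: str) -> datetime:
    # %z consumes "+0000"; the trailing " UTC" is matched literally.
    stamp = re.search(r"expired: (.+)$", err).group(1)
    return datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S %z UTC")

expiry = cert_expiry(ERR)
log_time = datetime(2026, 2, 26, 8, 12, 45, tzinfo=timezone.utc)
print(log_time > expiry)  # True: the cert expired about two days before this boot
```

On a live node the same check is usually done against the PEM files under /var/lib/kubelet/pki directly rather than by scraping the log.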
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.519688 4741 server.go:997] "Starting client certificate rotation"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.519746 4741 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.520098 4741 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.552743 4741 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.554549 4741 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.559929 4741 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.582638 4741 log.go:25] "Validated CRI v1 runtime API"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.624186 4741 log.go:25] "Validated CRI v1 image API"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.630672 4741 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.637725 4741 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-26-08-07-59-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.637800 4741 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:44 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.668021 4741 manager.go:217] Machine: {Timestamp:2026-02-26 08:12:45.662897277 +0000 UTC m=+0.658834704 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:76b81fca-617e-45c9-86c2-b22f80bbe1d0 BootID:5b187775-8409-4c81-b985-3b98d85603dc Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:44 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a5:66:16 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a5:66:16 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ac:c7:ee Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a2:b8:01 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:4a:51:e7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b3:78:7c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:02:03:ba:47:fe:e7 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:66:d0:3e:42:ad:b9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.668503 4741 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
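cAdvisor's `Machine:` record reports every capacity in raw bytes (MemoryCapacity, filesystem Capacity, disk Size), which is awkward to read at a glance. A tiny helper for converting those figures, applied to three values taken from the record above:

```python
# Convert the byte capacities cAdvisor logs into GiB, rounded to one decimal.
def to_gib(capacity_bytes: int) -> float:
    return round(capacity_bytes / 2**30, 1)

# Values taken from the Machine record in this log.
memory = to_gib(33654128640)    # MemoryCapacity
root_fs = to_gib(85292941312)   # /dev/vda4, mounted at /var
disk = to_gib(214748364800)     # vda
print(memory, root_fs, disk)    # 31.3 79.4 200.0
```

So this node reports roughly 31 GiB of RAM, a 200 GiB virtual disk, and about 79 GiB usable on the /var filesystem.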
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.668717 4741 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.669228 4741 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.669546 4741 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.669600 4741 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.669960 4741 topology_manager.go:138] "Creating topology manager with none policy"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.669979 4741 container_manager_linux.go:303] "Creating device plugin manager"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.670699 4741 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.670751 4741 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.671725 4741 state_mem.go:36] "Initialized new in-memory state store"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.671955 4741 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.676891 4741 kubelet.go:418] "Attempting to sync node with API server"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.676925 4741 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.676966 4741 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.676990 4741 kubelet.go:324] "Adding apiserver pod source"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.677008 4741 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.682683 4741 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.684021 4741 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.685174 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused
Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.685178 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused
Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.685406 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError"
Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.685416 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.685907 4741 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 26 08:12:45
crc kubenswrapper[4741]: I0226 08:12:45.687914 4741 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.687969 4741 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.687985 4741 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.687998 4741 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.688020 4741 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.688033 4741 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.688047 4741 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.688068 4741 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.688084 4741 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.688097 4741 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.688158 4741 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.688173 4741 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.692963 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection 
refused Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.693309 4741 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.695472 4741 server.go:1280] "Started kubelet" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.695706 4741 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.696159 4741 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.697729 4741 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 26 08:12:45 crc systemd[1]: Started Kubernetes Kubelet. Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.700105 4741 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.700184 4741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.700629 4741 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.700679 4741 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.700731 4741 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.700644 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.702169 4741 factory.go:55] Registering systemd factory Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.702206 4741 factory.go:221] Registration of the systemd container factory successfully Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.702698 4741 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.702790 4741 factory.go:153] Registering CRI-O factory Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.702829 4741 factory.go:221] Registration of the crio container factory successfully Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.702830 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.702968 4741 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.703035 4741 factory.go:103] Registering Raw factory Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.703072 4741 manager.go:1196] Started watching for new ooms in manager Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.702990 4741 server.go:460] "Adding debug handlers to kubelet server" Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.703291 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="200ms" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.705056 4741 manager.go:319] Starting recovery of all containers 
Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.706183 4741 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.166:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897bdb583fe3383 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.694784387 +0000 UTC m=+0.690721804,LastTimestamp:2026-02-26 08:12:45.694784387 +0000 UTC m=+0.690721804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.721856 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.722076 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.722155 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.722597 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.722745 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.722778 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.722858 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.722890 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.722999 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.723034 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.723062 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.723203 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.723410 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.723589 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.723696 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.723783 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.723823 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.723872 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.723905 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.723964 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.723999 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724031 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724071 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724103 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724182 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724224 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724274 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724309 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724349 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724383 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724421 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724448 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724477 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724516 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724550 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724584 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724623 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724656 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724700 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724731 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724765 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724802 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724834 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724871 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724900 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724934 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.724975 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.725005 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.725041 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.725078 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.725166 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.725207 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.725260 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.725297 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.725349 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.725386 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.725419 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.725457 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.726632 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.726680 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.726714 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.726745 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.726777 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.726806 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.726841 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.726871 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.726901 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.726928 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.726956 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727000 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727028 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727056 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727086 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727150 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727178 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727205 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727235 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727279 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727308 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727337 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727369 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727401 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727429 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727462 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727495 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727523 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727552 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727620 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727660 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727692 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69"
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727719 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727747 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727776 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727806 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727834 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727864 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727894 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727922 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727952 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.727985 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728016 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728048 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728076 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728144 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728199 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728233 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728271 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728304 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728338 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728371 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728409 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728444 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728476 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728506 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" 
seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728536 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728567 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728597 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728625 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728654 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728684 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 26 08:12:45 crc 
kubenswrapper[4741]: I0226 08:12:45.728796 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728837 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728872 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728901 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728929 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728960 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.728989 4741 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729027 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729060 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729089 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729155 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729186 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729229 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729261 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729292 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729323 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729353 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729387 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729414 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729442 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729471 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729504 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729532 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729567 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729595 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729625 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729652 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729677 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729706 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729734 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729764 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" 
seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729790 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729816 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729846 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729876 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729902 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729931 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 
08:12:45.729958 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.729983 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730020 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730047 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730074 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730101 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730177 4741 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730204 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730232 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730302 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730332 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730360 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730387 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730413 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730444 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730472 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730501 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730535 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730568 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730594 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730621 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730650 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730678 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730705 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730731 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730761 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730788 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730816 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730840 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730864 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730894 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730919 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730948 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.730973 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.731008 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.731038 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.731066 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.731093 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.731156 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.731184 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.731212 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.731242 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.731269 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.731295 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.734297 4741 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.734357 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.734398 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.734431 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.734459 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.734488 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.734521 4741 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.734546 4741 reconstruct.go:97] "Volume reconstruction finished" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.734565 4741 reconciler.go:26] "Reconciler: start to sync state" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.743667 4741 manager.go:324] Recovery completed Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.766544 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.769700 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.769759 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.769779 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.771258 4741 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 26 08:12:45 crc kubenswrapper[4741]: 
I0226 08:12:45.771291 4741 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.771361 4741 state_mem.go:36] "Initialized new in-memory state store" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.781246 4741 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.785077 4741 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.785417 4741 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.785653 4741 kubelet.go:2335] "Starting kubelet main sync loop" Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.785980 4741 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 26 08:12:45 crc kubenswrapper[4741]: W0226 08:12:45.786327 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.786426 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.789806 4741 policy_none.go:49] "None policy: Start" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.791490 4741 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 
08:12:45.791522 4741 state_mem.go:35] "Initializing new in-memory state store" Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.801376 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.861893 4741 manager.go:334] "Starting Device Plugin manager" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.862017 4741 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.862069 4741 server.go:79] "Starting device plugin registration server" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.862974 4741 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.863040 4741 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.863287 4741 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.863428 4741 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.863453 4741 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.872210 4741 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.886607 4741 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 
26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.886748 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.888070 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.888129 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.888143 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.888338 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.889013 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.889089 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.889130 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.889105 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.889145 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.889517 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.889782 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.889872 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.891097 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.891236 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.891328 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.891204 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.891508 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.891528 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.891800 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.891339 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.891981 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.892002 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:45 crc kubenswrapper[4741]: 
I0226 08:12:45.892302 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.892390 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.893845 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.893957 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.894039 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.894245 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.894390 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.894481 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.894212 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.894614 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.894642 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.899500 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.899542 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.899564 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.899830 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.899879 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.900938 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.900985 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.900999 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.902501 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.902544 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.902558 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.904075 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="400ms" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937191 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937258 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937294 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937330 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937393 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937436 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937465 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937490 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937517 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937542 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937630 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937699 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937733 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937768 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.937801 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.963577 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.964671 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.964710 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 
26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.964724 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:45 crc kubenswrapper[4741]: I0226 08:12:45.964759 4741 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:12:45 crc kubenswrapper[4741]: E0226 08:12:45.965390 4741 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.039790 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.039881 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.039916 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.039947 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 
26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.039990 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.040033 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.040071 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.040146 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.040184 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.040231 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.040274 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.040309 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.040341 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.040375 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.040407 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.040858 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.040896 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.041004 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.041032 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.041057 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.041086 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.041170 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.041189 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.041149 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.041105 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.041210 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.041250 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.041361 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.041518 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.041633 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.166200 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.168701 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.168750 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 
08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.168769 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.168806 4741 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:12:46 crc kubenswrapper[4741]: E0226 08:12:46.169411 4741 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.226923 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.249570 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.265468 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: W0226 08:12:46.287970 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-1cad54d3f9004df74f665c0377dfff5fc755bc894dafd5d2bcf28b160468f140 WatchSource:0}: Error finding container 1cad54d3f9004df74f665c0377dfff5fc755bc894dafd5d2bcf28b160468f140: Status 404 returned error can't find the container with id 1cad54d3f9004df74f665c0377dfff5fc755bc894dafd5d2bcf28b160468f140 Feb 26 08:12:46 crc kubenswrapper[4741]: W0226 08:12:46.288853 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b0ee14d481b966733a682328c1434bf442406401b3316cb4ec86488a9839afde WatchSource:0}: Error finding container b0ee14d481b966733a682328c1434bf442406401b3316cb4ec86488a9839afde: Status 404 returned error can't find the container with id b0ee14d481b966733a682328c1434bf442406401b3316cb4ec86488a9839afde Feb 26 08:12:46 crc kubenswrapper[4741]: W0226 08:12:46.291615 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-63a5efb8be20780438a88c6e0d9b5dc9b250cb04d2c2198c22096e052f6e05d1 WatchSource:0}: Error finding container 63a5efb8be20780438a88c6e0d9b5dc9b250cb04d2c2198c22096e052f6e05d1: Status 404 returned error can't find the container with id 63a5efb8be20780438a88c6e0d9b5dc9b250cb04d2c2198c22096e052f6e05d1 Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.295439 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: E0226 08:12:46.304670 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="800ms" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.304863 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 08:12:46 crc kubenswrapper[4741]: W0226 08:12:46.318939 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-986d25285bf4a134633b09ba05f4b660fe0c3f318054f1f1fd5ec012acd4d225 WatchSource:0}: Error finding container 986d25285bf4a134633b09ba05f4b660fe0c3f318054f1f1fd5ec012acd4d225: Status 404 returned error can't find the container with id 986d25285bf4a134633b09ba05f4b660fe0c3f318054f1f1fd5ec012acd4d225 Feb 26 08:12:46 crc kubenswrapper[4741]: W0226 08:12:46.344907 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-9cd9cc597355717802a11d2dcc302708fee0b7a23e8da543bb4025b664077b95 WatchSource:0}: Error finding container 9cd9cc597355717802a11d2dcc302708fee0b7a23e8da543bb4025b664077b95: Status 404 returned error can't find the container with id 9cd9cc597355717802a11d2dcc302708fee0b7a23e8da543bb4025b664077b95 Feb 26 08:12:46 crc kubenswrapper[4741]: W0226 08:12:46.489758 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: 
connect: connection refused Feb 26 08:12:46 crc kubenswrapper[4741]: E0226 08:12:46.489925 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.570190 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.571773 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.571878 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.571905 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.571956 4741 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:12:46 crc kubenswrapper[4741]: E0226 08:12:46.572747 4741 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.694239 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Feb 26 08:12:46 crc kubenswrapper[4741]: W0226 08:12:46.753661 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Feb 26 08:12:46 crc kubenswrapper[4741]: E0226 08:12:46.753774 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.791861 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9cd9cc597355717802a11d2dcc302708fee0b7a23e8da543bb4025b664077b95"} Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.793618 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"986d25285bf4a134633b09ba05f4b660fe0c3f318054f1f1fd5ec012acd4d225"} Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.795802 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"63a5efb8be20780438a88c6e0d9b5dc9b250cb04d2c2198c22096e052f6e05d1"} Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.797839 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b0ee14d481b966733a682328c1434bf442406401b3316cb4ec86488a9839afde"} Feb 26 08:12:46 crc kubenswrapper[4741]: I0226 08:12:46.799358 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1cad54d3f9004df74f665c0377dfff5fc755bc894dafd5d2bcf28b160468f140"} Feb 26 08:12:46 crc kubenswrapper[4741]: W0226 08:12:46.869736 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Feb 26 08:12:46 crc kubenswrapper[4741]: E0226 08:12:46.869888 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Feb 26 08:12:47 crc kubenswrapper[4741]: E0226 08:12:47.106226 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="1.6s" Feb 26 08:12:47 crc kubenswrapper[4741]: W0226 08:12:47.163894 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Feb 26 08:12:47 crc kubenswrapper[4741]: E0226 08:12:47.164036 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 
08:12:47.373096 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.374662 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.374775 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.374843 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.374937 4741 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:12:47 crc kubenswrapper[4741]: E0226 08:12:47.375753 4741 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.634000 4741 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 08:12:47 crc kubenswrapper[4741]: E0226 08:12:47.636358 4741 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.693929 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Feb 26 08:12:47 crc 
kubenswrapper[4741]: I0226 08:12:47.811416 4741 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9" exitCode=0 Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.811658 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9"} Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.811697 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.813483 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.813572 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.813600 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.814909 4741 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a" exitCode=0 Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.815046 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a"} Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.815071 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 
08:12:47.816621 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.816689 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.816708 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.817709 4741 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84" exitCode=0 Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.817823 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84"} Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.817887 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.819628 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.819668 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.819688 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.822782 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bc93b4aac2624a69b157adf7a2bdc7a34a168ab88b2a7d6c9c5d2f81ac9f8ee8"} Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.822856 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da"} Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.822877 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c"} Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.825990 4741 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008" exitCode=0 Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.826046 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008"} Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.826258 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.827910 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.827956 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.827975 4741 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.830616 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.832085 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.832177 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:47 crc kubenswrapper[4741]: I0226 08:12:47.832198 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:48 crc kubenswrapper[4741]: W0226 08:12:48.592270 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Feb 26 08:12:48 crc kubenswrapper[4741]: E0226 08:12:48.592403 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Feb 26 08:12:48 crc kubenswrapper[4741]: W0226 08:12:48.664160 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Feb 26 08:12:48 crc kubenswrapper[4741]: E0226 08:12:48.664283 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.694764 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Feb 26 08:12:48 crc kubenswrapper[4741]: E0226 08:12:48.707811 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="3.2s" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.833632 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c"} Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.833690 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff"} Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.833705 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da"} Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.833792 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:48 
crc kubenswrapper[4741]: I0226 08:12:48.835256 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.835296 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.835310 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.838276 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a9f31097b46a80af3a1e40a888533457161d716ad5527e9455ce3734faab6213"} Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.838367 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.839549 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.839589 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.839608 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.842605 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7"} Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.842679 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4"} Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.842699 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6"} Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.842712 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de"} Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.844923 4741 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da" exitCode=0 Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.845036 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.845037 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da"} Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.846196 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.846240 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.846254 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.848820 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"cf389edc547befcb611cfddd4d4d8a1b26b08bb4ebeb4b7724956e9e285800f9"} Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.849085 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.850366 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.850403 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.850423 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.976783 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.978771 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.978831 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.978842 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:48 crc kubenswrapper[4741]: I0226 08:12:48.978869 4741 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:12:48 crc kubenswrapper[4741]: E0226 08:12:48.979527 4741 kubelet_node_status.go:99] "Unable to register node with API server" 
err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Feb 26 08:12:49 crc kubenswrapper[4741]: W0226 08:12:49.084041 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Feb 26 08:12:49 crc kubenswrapper[4741]: E0226 08:12:49.084171 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Feb 26 08:12:49 crc kubenswrapper[4741]: W0226 08:12:49.202412 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Feb 26 08:12:49 crc kubenswrapper[4741]: E0226 08:12:49.202540 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.859391 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8e0a8e404e312403c3908b5df533746b82086c155e31afb1a3761de7eba56ebd"} Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.859463 4741 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.861087 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.861159 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.861176 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.863494 4741 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0" exitCode=0 Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.863545 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0"} Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.863670 4741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.863682 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.863711 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.863745 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.863752 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:49 crc kubenswrapper[4741]: 
I0226 08:12:49.865406 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.865475 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.865496 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.865750 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.865790 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.865809 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.866854 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.866918 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.866956 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.867065 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.867155 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:49 crc kubenswrapper[4741]: I0226 08:12:49.867193 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 26 08:12:50 crc kubenswrapper[4741]: I0226 08:12:50.598241 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:12:50 crc kubenswrapper[4741]: I0226 08:12:50.875398 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2701d043d832010159547124ccd80f573b3ec40f7026fcc5d5814c02864d077a"} Feb 26 08:12:50 crc kubenswrapper[4741]: I0226 08:12:50.875487 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7b6651b7b066cfb6320f48bea0d01e86854f975f80ac8af40eed041ac51abecb"} Feb 26 08:12:50 crc kubenswrapper[4741]: I0226 08:12:50.875504 4741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 08:12:50 crc kubenswrapper[4741]: I0226 08:12:50.875599 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:50 crc kubenswrapper[4741]: I0226 08:12:50.875515 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1adb81ed7dd7307050b13477ab3abd2ac58ed0196865fe4c1cabdc028f71edb0"} Feb 26 08:12:50 crc kubenswrapper[4741]: I0226 08:12:50.876968 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:50 crc kubenswrapper[4741]: I0226 08:12:50.877028 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:50 crc kubenswrapper[4741]: I0226 08:12:50.877047 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.299955 4741 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.444085 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.444431 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.446075 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.446186 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.446207 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.452355 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.701984 4741 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.885104 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.885147 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5adc5002cb64cbb8f41271df8b506a043d82f75f3484ee483065d26242efaed4"} Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.885271 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e8d2d00c33ecb888bad4d8099db424712d33f7980cc90f4eb398ffb6020548a4"} Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.885323 4741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.885417 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.885814 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.886048 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.886080 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.886090 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.887013 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.887082 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.887144 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.889020 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.889066 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 26 08:12:51 crc kubenswrapper[4741]: I0226 08:12:51.889078 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:52 crc kubenswrapper[4741]: I0226 08:12:52.180051 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:52 crc kubenswrapper[4741]: I0226 08:12:52.182009 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:52 crc kubenswrapper[4741]: I0226 08:12:52.182086 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:52 crc kubenswrapper[4741]: I0226 08:12:52.182150 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:52 crc kubenswrapper[4741]: I0226 08:12:52.182199 4741 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:12:52 crc kubenswrapper[4741]: I0226 08:12:52.889342 4741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 08:12:52 crc kubenswrapper[4741]: I0226 08:12:52.889416 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:52 crc kubenswrapper[4741]: I0226 08:12:52.889428 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:52 crc kubenswrapper[4741]: I0226 08:12:52.891398 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:52 crc kubenswrapper[4741]: I0226 08:12:52.891423 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:52 crc kubenswrapper[4741]: I0226 08:12:52.891452 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:52 crc 
kubenswrapper[4741]: I0226 08:12:52.891472 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:52 crc kubenswrapper[4741]: I0226 08:12:52.891478 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:52 crc kubenswrapper[4741]: I0226 08:12:52.891505 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.622547 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.622825 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.624717 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.624816 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.624839 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.712676 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.813571 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.892247 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.892262 4741 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.893974 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.894022 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.894037 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.893986 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.894151 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.894172 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.945333 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.945591 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.947599 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.947665 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:53 crc kubenswrapper[4741]: I0226 08:12:53.947679 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:54 crc 
kubenswrapper[4741]: I0226 08:12:54.092334 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:12:54 crc kubenswrapper[4741]: I0226 08:12:54.092691 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:54 crc kubenswrapper[4741]: I0226 08:12:54.094731 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:54 crc kubenswrapper[4741]: I0226 08:12:54.094789 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:54 crc kubenswrapper[4741]: I0226 08:12:54.094803 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:55 crc kubenswrapper[4741]: I0226 08:12:55.061850 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:12:55 crc kubenswrapper[4741]: I0226 08:12:55.062163 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:55 crc kubenswrapper[4741]: I0226 08:12:55.064036 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:55 crc kubenswrapper[4741]: I0226 08:12:55.064318 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:55 crc kubenswrapper[4741]: I0226 08:12:55.064347 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:55 crc kubenswrapper[4741]: E0226 08:12:55.872344 4741 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 08:12:58 crc kubenswrapper[4741]: I0226 
08:12:58.062364 4741 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 08:12:58 crc kubenswrapper[4741]: I0226 08:12:58.062495 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 08:12:59 crc kubenswrapper[4741]: I0226 08:12:59.033904 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 26 08:12:59 crc kubenswrapper[4741]: I0226 08:12:59.034327 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:12:59 crc kubenswrapper[4741]: I0226 08:12:59.036695 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:12:59 crc kubenswrapper[4741]: I0226 08:12:59.036765 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:12:59 crc kubenswrapper[4741]: I0226 08:12:59.036792 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:12:59 crc kubenswrapper[4741]: I0226 08:12:59.695576 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 26 08:13:00 crc kubenswrapper[4741]: W0226 08:13:00.295402 4741 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:00Z is after 2026-02-23T05:33:13Z Feb 26 08:13:00 crc kubenswrapper[4741]: E0226 08:13:00.295531 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 08:13:00 crc kubenswrapper[4741]: W0226 08:13:00.300921 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:00Z is after 2026-02-23T05:33:13Z Feb 26 08:13:00 crc kubenswrapper[4741]: E0226 08:13:00.301045 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 08:13:00 crc kubenswrapper[4741]: E0226 08:13:00.304454 4741 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-26T08:13:00Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897bdb583fe3383 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.694784387 +0000 UTC m=+0.690721804,LastTimestamp:2026-02-26 08:12:45.694784387 +0000 UTC m=+0.690721804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:00 crc kubenswrapper[4741]: E0226 08:13:00.307353 4741 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.307563 4741 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.307643 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with 
statuscode: 403" Feb 26 08:13:00 crc kubenswrapper[4741]: E0226 08:13:00.312713 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:00Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 26 08:13:00 crc kubenswrapper[4741]: W0226 08:13:00.315550 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:00Z is after 2026-02-23T05:33:13Z Feb 26 08:13:00 crc kubenswrapper[4741]: E0226 08:13:00.315652 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 08:13:00 crc kubenswrapper[4741]: E0226 08:13:00.316595 4741 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:00Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 08:13:00 crc kubenswrapper[4741]: W0226 08:13:00.318169 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:00Z is after 2026-02-23T05:33:13Z Feb 26 08:13:00 crc kubenswrapper[4741]: E0226 08:13:00.318264 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.319575 4741 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.319656 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.607626 4741 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]log ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]etcd ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 26 08:13:00 crc kubenswrapper[4741]: 
[+]poststarthook/start-apiserver-admission-initializer ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/generic-apiserver-start-informers ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/priority-and-fairness-filter ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/start-apiextensions-informers ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/start-apiextensions-controllers ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/crd-informer-synced ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/start-system-namespaces-controller ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 26 08:13:00 crc kubenswrapper[4741]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 26 08:13:00 crc kubenswrapper[4741]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 26 08:13:00 crc kubenswrapper[4741]: 
[+]poststarthook/priority-and-fairness-config-producer ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/bootstrap-controller ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/start-kube-aggregator-informers ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/apiservice-registration-controller ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/apiservice-discovery-controller ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]autoregister-completion ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/apiservice-openapi-controller ok Feb 26 08:13:00 crc kubenswrapper[4741]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 26 08:13:00 crc kubenswrapper[4741]: livez check failed Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.608193 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.698713 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:00Z is after 2026-02-23T05:33:13Z Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 
08:13:00.916804 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.919999 4741 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8e0a8e404e312403c3908b5df533746b82086c155e31afb1a3761de7eba56ebd" exitCode=255 Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.920081 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8e0a8e404e312403c3908b5df533746b82086c155e31afb1a3761de7eba56ebd"} Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.920381 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.921800 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.921855 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.921880 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:00 crc kubenswrapper[4741]: I0226 08:13:00.922750 4741 scope.go:117] "RemoveContainer" containerID="8e0a8e404e312403c3908b5df533746b82086c155e31afb1a3761de7eba56ebd" Feb 26 08:13:01 crc kubenswrapper[4741]: I0226 08:13:01.700231 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:01Z is after 
2026-02-23T05:33:13Z Feb 26 08:13:01 crc kubenswrapper[4741]: I0226 08:13:01.926913 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 08:13:01 crc kubenswrapper[4741]: I0226 08:13:01.929813 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a6c33815404903d354313bc6d721c5e93b892877f87b618d10912189527b6db"} Feb 26 08:13:01 crc kubenswrapper[4741]: I0226 08:13:01.930036 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:01 crc kubenswrapper[4741]: I0226 08:13:01.931539 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:01 crc kubenswrapper[4741]: I0226 08:13:01.931593 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:01 crc kubenswrapper[4741]: I0226 08:13:01.931613 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:02 crc kubenswrapper[4741]: I0226 08:13:02.699927 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:02Z is after 2026-02-23T05:33:13Z Feb 26 08:13:02 crc kubenswrapper[4741]: I0226 08:13:02.935811 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 08:13:02 crc kubenswrapper[4741]: I0226 08:13:02.936677 4741 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 08:13:02 crc kubenswrapper[4741]: I0226 08:13:02.939555 4741 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a6c33815404903d354313bc6d721c5e93b892877f87b618d10912189527b6db" exitCode=255 Feb 26 08:13:02 crc kubenswrapper[4741]: I0226 08:13:02.939633 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3a6c33815404903d354313bc6d721c5e93b892877f87b618d10912189527b6db"} Feb 26 08:13:02 crc kubenswrapper[4741]: I0226 08:13:02.939727 4741 scope.go:117] "RemoveContainer" containerID="8e0a8e404e312403c3908b5df533746b82086c155e31afb1a3761de7eba56ebd" Feb 26 08:13:02 crc kubenswrapper[4741]: I0226 08:13:02.939978 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:02 crc kubenswrapper[4741]: I0226 08:13:02.942044 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:02 crc kubenswrapper[4741]: I0226 08:13:02.942089 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:02 crc kubenswrapper[4741]: I0226 08:13:02.942130 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:02 crc kubenswrapper[4741]: I0226 08:13:02.942985 4741 scope.go:117] "RemoveContainer" containerID="3a6c33815404903d354313bc6d721c5e93b892877f87b618d10912189527b6db" Feb 26 08:13:02 crc kubenswrapper[4741]: E0226 08:13:02.943239 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.299609 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.629403 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.629947 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.631690 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.631757 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.631778 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.699915 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:03Z is after 2026-02-23T05:33:13Z Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.713418 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.946265 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.949095 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.950515 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.950580 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.950605 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:03 crc kubenswrapper[4741]: I0226 08:13:03.951538 4741 scope.go:117] "RemoveContainer" containerID="3a6c33815404903d354313bc6d721c5e93b892877f87b618d10912189527b6db" Feb 26 08:13:03 crc kubenswrapper[4741]: E0226 08:13:03.951844 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 08:13:04 crc kubenswrapper[4741]: I0226 08:13:04.698765 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:04Z is after 2026-02-23T05:33:13Z Feb 26 08:13:04 crc kubenswrapper[4741]: I0226 08:13:04.952070 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 26 08:13:04 crc kubenswrapper[4741]: I0226 08:13:04.953532 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:04 crc kubenswrapper[4741]: I0226 08:13:04.953611 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:04 crc kubenswrapper[4741]: I0226 08:13:04.953636 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:04 crc kubenswrapper[4741]: I0226 08:13:04.954825 4741 scope.go:117] "RemoveContainer" containerID="3a6c33815404903d354313bc6d721c5e93b892877f87b618d10912189527b6db" Feb 26 08:13:04 crc kubenswrapper[4741]: E0226 08:13:04.955187 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 08:13:05 crc kubenswrapper[4741]: I0226 08:13:05.607785 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:13:05 crc kubenswrapper[4741]: I0226 08:13:05.699233 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:05Z is after 2026-02-23T05:33:13Z Feb 26 08:13:05 crc kubenswrapper[4741]: E0226 08:13:05.872519 4741 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 08:13:05 crc 
kubenswrapper[4741]: I0226 08:13:05.954538 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:05 crc kubenswrapper[4741]: I0226 08:13:05.955813 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:05 crc kubenswrapper[4741]: I0226 08:13:05.955891 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:05 crc kubenswrapper[4741]: I0226 08:13:05.955909 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:05 crc kubenswrapper[4741]: I0226 08:13:05.957322 4741 scope.go:117] "RemoveContainer" containerID="3a6c33815404903d354313bc6d721c5e93b892877f87b618d10912189527b6db" Feb 26 08:13:05 crc kubenswrapper[4741]: E0226 08:13:05.957663 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 08:13:05 crc kubenswrapper[4741]: I0226 08:13:05.964340 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:13:06 crc kubenswrapper[4741]: I0226 08:13:06.700103 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:06Z is after 2026-02-23T05:33:13Z Feb 26 08:13:06 crc kubenswrapper[4741]: I0226 08:13:06.717488 4741 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 26 08:13:06 crc kubenswrapper[4741]: E0226 08:13:06.718749 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:06Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 08:13:06 crc kubenswrapper[4741]: I0226 08:13:06.719426 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:06 crc kubenswrapper[4741]: I0226 08:13:06.719486 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:06 crc kubenswrapper[4741]: I0226 08:13:06.719508 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:06 crc kubenswrapper[4741]: I0226 08:13:06.719548 4741 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:13:06 crc kubenswrapper[4741]: E0226 08:13:06.724339 4741 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:13:06Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 08:13:06 crc kubenswrapper[4741]: I0226 08:13:06.957459 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:06 crc kubenswrapper[4741]: I0226 08:13:06.958699 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:06 crc kubenswrapper[4741]: I0226 08:13:06.958779 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 26 08:13:06 crc kubenswrapper[4741]: I0226 08:13:06.958793 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:06 crc kubenswrapper[4741]: I0226 08:13:06.959567 4741 scope.go:117] "RemoveContainer" containerID="3a6c33815404903d354313bc6d721c5e93b892877f87b618d10912189527b6db" Feb 26 08:13:06 crc kubenswrapper[4741]: E0226 08:13:06.959796 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 08:13:07 crc kubenswrapper[4741]: I0226 08:13:07.701536 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:07 crc kubenswrapper[4741]: W0226 08:13:07.863923 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 26 08:13:07 crc kubenswrapper[4741]: E0226 08:13:07.864036 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 26 08:13:08 crc kubenswrapper[4741]: I0226 08:13:08.063251 4741 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure 
output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 08:13:08 crc kubenswrapper[4741]: I0226 08:13:08.063397 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 08:13:08 crc kubenswrapper[4741]: I0226 08:13:08.700415 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:09 crc kubenswrapper[4741]: I0226 08:13:09.011876 4741 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 08:13:09 crc kubenswrapper[4741]: I0226 08:13:09.031238 4741 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 08:13:09 crc kubenswrapper[4741]: I0226 08:13:09.062483 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 26 08:13:09 crc kubenswrapper[4741]: I0226 08:13:09.062720 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:09 crc kubenswrapper[4741]: I0226 08:13:09.064025 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:09 crc kubenswrapper[4741]: I0226 08:13:09.064067 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:09 crc 
kubenswrapper[4741]: I0226 08:13:09.064080 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:09 crc kubenswrapper[4741]: I0226 08:13:09.080838 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 26 08:13:09 crc kubenswrapper[4741]: I0226 08:13:09.700873 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:09 crc kubenswrapper[4741]: I0226 08:13:09.966613 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:09 crc kubenswrapper[4741]: I0226 08:13:09.968195 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:09 crc kubenswrapper[4741]: I0226 08:13:09.968252 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:09 crc kubenswrapper[4741]: I0226 08:13:09.968274 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.314434 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb583fe3383 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.694784387 +0000 UTC m=+0.690721804,LastTimestamp:2026-02-26 08:12:45.694784387 +0000 UTC 
m=+0.690721804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.322195 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58875e5b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769737655 +0000 UTC m=+0.765675082,LastTimestamp:2026-02-26 08:12:45.769737655 +0000 UTC m=+0.765675082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.329242 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb588766ead default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769772717 +0000 UTC m=+0.765710144,LastTimestamp:2026-02-26 08:12:45.769772717 +0000 UTC m=+0.765710144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 
08:13:10.335675 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58876af86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769789318 +0000 UTC m=+0.765726755,LastTimestamp:2026-02-26 08:12:45.769789318 +0000 UTC m=+0.765726755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.342246 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58e271481 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.865235585 +0000 UTC m=+0.861172972,LastTimestamp:2026-02-26 08:12:45.865235585 +0000 UTC m=+0.861172972,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.351816 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb58875e5b7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the 
namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58875e5b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769737655 +0000 UTC m=+0.765675082,LastTimestamp:2026-02-26 08:12:45.888095802 +0000 UTC m=+0.884033189,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.358745 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb588766ead\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb588766ead default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769772717 +0000 UTC m=+0.765710144,LastTimestamp:2026-02-26 08:12:45.888137944 +0000 UTC m=+0.884075341,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.365829 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb58876af86\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58876af86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769789318 +0000 UTC m=+0.765726755,LastTimestamp:2026-02-26 08:12:45.888150525 +0000 UTC m=+0.884087912,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.374297 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb58875e5b7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58875e5b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769737655 +0000 UTC m=+0.765675082,LastTimestamp:2026-02-26 08:12:45.889100897 +0000 UTC m=+0.885038294,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.381206 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb588766ead\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb588766ead default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769772717 +0000 UTC m=+0.765710144,LastTimestamp:2026-02-26 08:12:45.889140539 +0000 UTC m=+0.885077926,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.388179 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb58876af86\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58876af86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769789318 +0000 UTC m=+0.765726755,LastTimestamp:2026-02-26 08:12:45.889253315 +0000 UTC m=+0.885190732,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.395187 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb58875e5b7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58875e5b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769737655 +0000 UTC 
m=+0.765675082,LastTimestamp:2026-02-26 08:12:45.891221792 +0000 UTC m=+0.887159179,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.401048 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb588766ead\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb588766ead default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769772717 +0000 UTC m=+0.765710144,LastTimestamp:2026-02-26 08:12:45.891319568 +0000 UTC m=+0.887256965,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.407376 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb58876af86\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58876af86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769789318 +0000 UTC m=+0.765726755,LastTimestamp:2026-02-26 08:12:45.891446355 +0000 UTC m=+0.887383742,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.414011 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb58875e5b7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58875e5b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769737655 +0000 UTC m=+0.765675082,LastTimestamp:2026-02-26 08:12:45.891489977 +0000 UTC m=+0.887427404,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.421170 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb588766ead\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb588766ead default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769772717 +0000 UTC m=+0.765710144,LastTimestamp:2026-02-26 08:12:45.891522079 +0000 UTC m=+0.887459506,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.428835 4741 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb58876af86\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58876af86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769789318 +0000 UTC m=+0.765726755,LastTimestamp:2026-02-26 08:12:45.89153998 +0000 UTC m=+0.887477407,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.431535 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb58875e5b7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58875e5b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769737655 +0000 UTC m=+0.765675082,LastTimestamp:2026-02-26 08:12:45.891970473 +0000 UTC m=+0.887907900,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.436862 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb588766ead\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb588766ead default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769772717 +0000 UTC m=+0.765710144,LastTimestamp:2026-02-26 08:12:45.891995145 +0000 UTC m=+0.887932572,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.438940 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb58876af86\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58876af86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769789318 +0000 UTC m=+0.765726755,LastTimestamp:2026-02-26 08:12:45.892013636 +0000 UTC m=+0.887951063,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.441407 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb58875e5b7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58875e5b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769737655 +0000 UTC m=+0.765675082,LastTimestamp:2026-02-26 08:12:45.893939771 +0000 UTC m=+0.889877158,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.445888 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb588766ead\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb588766ead default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769772717 +0000 UTC m=+0.765710144,LastTimestamp:2026-02-26 08:12:45.894030096 +0000 UTC m=+0.889967483,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.448249 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb58876af86\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58876af86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769789318 +0000 UTC m=+0.765726755,LastTimestamp:2026-02-26 08:12:45.894125361 +0000 UTC m=+0.890062748,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.452673 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb58875e5b7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb58875e5b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769737655 +0000 UTC m=+0.765675082,LastTimestamp:2026-02-26 08:12:45.894585486 +0000 UTC m=+0.890522913,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.458382 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897bdb588766ead\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897bdb588766ead default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:45.769772717 +0000 UTC 
m=+0.765710144,LastTimestamp:2026-02-26 08:12:45.894632998 +0000 UTC m=+0.890570425,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.466713 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb5a7e5669f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:46.297138847 +0000 UTC m=+1.293076274,LastTimestamp:2026-02-26 08:12:46.297138847 +0000 UTC m=+1.293076274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.472521 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb5a7e5c583 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:46.297163139 +0000 UTC m=+1.293100556,LastTimestamp:2026-02-26 08:12:46.297163139 +0000 UTC m=+1.293100556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.478704 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb5a7f58069 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:46.298194025 +0000 UTC m=+1.294131422,LastTimestamp:2026-02-26 08:12:46.298194025 +0000 UTC m=+1.294131422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.484970 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897bdb5a9c95aee openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:46.328855278 +0000 UTC m=+1.324792675,LastTimestamp:2026-02-26 08:12:46.328855278 +0000 UTC m=+1.324792675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.491496 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897bdb5aafb423b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:46.348902971 +0000 UTC 
m=+1.344840398,LastTimestamp:2026-02-26 08:12:46.348902971 +0000 UTC m=+1.344840398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.498043 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897bdb5d0b28fe0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:46.981672928 +0000 UTC m=+1.977610355,LastTimestamp:2026-02-26 08:12:46.981672928 +0000 UTC m=+1.977610355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.505323 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb5d0bf6941 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:46.982515009 +0000 UTC 
m=+1.978452436,LastTimestamp:2026-02-26 08:12:46.982515009 +0000 UTC m=+1.978452436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.510061 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb5d0cf9d12 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:46.98357685 +0000 UTC m=+1.979514277,LastTimestamp:2026-02-26 08:12:46.98357685 +0000 UTC m=+1.979514277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.517428 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897bdb5d0d4643d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created 
container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:46.983889981 +0000 UTC m=+1.979827408,LastTimestamp:2026-02-26 08:12:46.983889981 +0000 UTC m=+1.979827408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.529584 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb5d0d65d83 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:46.984019331 +0000 UTC m=+1.979956758,LastTimestamp:2026-02-26 08:12:46.984019331 +0000 UTC m=+1.979956758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.536575 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897bdb5d17342ec openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:46.994301676 +0000 UTC m=+1.990239093,LastTimestamp:2026-02-26 08:12:46.994301676 +0000 UTC m=+1.990239093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.543225 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb5d208681c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.00407606 +0000 UTC m=+2.000013457,LastTimestamp:2026-02-26 08:12:47.00407606 +0000 UTC m=+2.000013457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.550086 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897bdb5d21e9148 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.005528392 +0000 UTC m=+2.001465819,LastTimestamp:2026-02-26 08:12:47.005528392 +0000 UTC m=+2.001465819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.556947 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb5d21ea3fe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.005533182 +0000 UTC m=+2.001470609,LastTimestamp:2026-02-26 08:12:47.005533182 +0000 UTC m=+2.001470609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.563605 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb5d22be79f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.006402463 +0000 UTC m=+2.002339890,LastTimestamp:2026-02-26 08:12:47.006402463 +0000 UTC m=+2.002339890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.568942 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb5d2416621 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.007811105 +0000 UTC m=+2.003748502,LastTimestamp:2026-02-26 08:12:47.007811105 +0000 UTC m=+2.003748502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.575011 4741 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb5e9dd5239 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.403905593 +0000 UTC m=+2.399843020,LastTimestamp:2026-02-26 08:12:47.403905593 +0000 UTC m=+2.399843020,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.580058 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb5eadf6c61 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.420820577 +0000 UTC m=+2.416757964,LastTimestamp:2026-02-26 08:12:47.420820577 +0000 UTC m=+2.416757964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.586538 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb5eafb1aa4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.42263466 +0000 UTC m=+2.418572087,LastTimestamp:2026-02-26 08:12:47.42263466 +0000 UTC m=+2.418572087,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.592358 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb5f9cc001c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.671205916 +0000 UTC m=+2.667143343,LastTimestamp:2026-02-26 08:12:47.671205916 +0000 UTC m=+2.667143343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.601210 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb5fabb6e89 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.686897289 +0000 UTC m=+2.682834686,LastTimestamp:2026-02-26 08:12:47.686897289 +0000 UTC m=+2.682834686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.610234 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb5fad8961a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.688807962 +0000 UTC m=+2.684745359,LastTimestamp:2026-02-26 08:12:47.688807962 +0000 UTC m=+2.684745359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.617833 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb602691b22 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.815719714 +0000 UTC m=+2.811657141,LastTimestamp:2026-02-26 08:12:47.815719714 +0000 UTC 
m=+2.811657141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.627491 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897bdb602a5a6ff openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.819687679 +0000 UTC m=+2.815625106,LastTimestamp:2026-02-26 08:12:47.819687679 +0000 UTC m=+2.815625106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.634781 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897bdb602c30996 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.821613462 +0000 UTC m=+2.817550889,LastTimestamp:2026-02-26 08:12:47.821613462 +0000 UTC m=+2.817550889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.644077 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb60348e933 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.830386995 +0000 UTC m=+2.826324412,LastTimestamp:2026-02-26 08:12:47.830386995 +0000 UTC m=+2.826324412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.647166 4741 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb60b556da2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.965425058 +0000 UTC m=+2.961362465,LastTimestamp:2026-02-26 08:12:47.965425058 +0000 UTC m=+2.961362465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.651515 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb60d69e9e2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.000322018 +0000 UTC m=+2.996259435,LastTimestamp:2026-02-26 08:12:48.000322018 +0000 UTC 
m=+2.996259435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.653777 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897bdb611cd1046 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.073928774 +0000 UTC m=+3.069866171,LastTimestamp:2026-02-26 08:12:48.073928774 +0000 UTC m=+3.069866171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.659552 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897bdb611f2ee6d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.076410477 +0000 UTC m=+3.072347884,LastTimestamp:2026-02-26 08:12:48.076410477 +0000 UTC m=+3.072347884,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.666428 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb6121bbe4b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.079085131 +0000 UTC m=+3.075022528,LastTimestamp:2026-02-26 08:12:48.079085131 +0000 UTC m=+3.075022528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.674172 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb612202b8a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container 
etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.079375242 +0000 UTC m=+3.075312649,LastTimestamp:2026-02-26 08:12:48.079375242 +0000 UTC m=+3.075312649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.681825 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897bdb6129be217 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.087482903 +0000 UTC m=+3.083420300,LastTimestamp:2026-02-26 08:12:48.087482903 +0000 UTC m=+3.083420300,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.690657 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897bdb612b63e76 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.089210486 +0000 UTC m=+3.085147883,LastTimestamp:2026-02-26 08:12:48.089210486 +0000 UTC m=+3.085147883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: I0226 08:13:10.697955 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.698170 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897bdb612dfd8e8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.091937 +0000 UTC m=+3.087874407,LastTimestamp:2026-02-26 08:12:48.091937 +0000 UTC 
m=+3.087874407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.702032 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb613927378 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.103641976 +0000 UTC m=+3.099579383,LastTimestamp:2026-02-26 08:12:48.103641976 +0000 UTC m=+3.099579383,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.707975 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb613b65850 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already 
present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.10599432 +0000 UTC m=+3.101931707,LastTimestamp:2026-02-26 08:12:48.10599432 +0000 UTC m=+3.101931707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.719014 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb6145ac92b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.116771115 +0000 UTC m=+3.112708502,LastTimestamp:2026-02-26 08:12:48.116771115 +0000 UTC m=+3.112708502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.728225 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb620560081 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created 
container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.317784193 +0000 UTC m=+3.313721580,LastTimestamp:2026-02-26 08:12:48.317784193 +0000 UTC m=+3.313721580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.735811 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897bdb6208a6aa6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.321219238 +0000 UTC m=+3.317156645,LastTimestamp:2026-02-26 08:12:48.321219238 +0000 UTC m=+3.317156645,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.743270 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb6211aa868 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.330672232 +0000 UTC m=+3.326609619,LastTimestamp:2026-02-26 08:12:48.330672232 +0000 UTC m=+3.326609619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.748724 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb62136126c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.332468844 +0000 UTC m=+3.328406271,LastTimestamp:2026-02-26 08:12:48.332468844 +0000 UTC m=+3.328406271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.760780 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897bdb621766b39 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.336685881 +0000 UTC m=+3.332623268,LastTimestamp:2026-02-26 08:12:48.336685881 +0000 UTC m=+3.332623268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.769008 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897bdb621fd678f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.345532303 +0000 UTC m=+3.341469690,LastTimestamp:2026-02-26 08:12:48.345532303 +0000 UTC m=+3.341469690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.783510 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb62e3b02fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.550896378 +0000 UTC m=+3.546833755,LastTimestamp:2026-02-26 08:12:48.550896378 +0000 UTC m=+3.546833755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.791548 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897bdb62e69cbac openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.553962412 
+0000 UTC m=+3.549899789,LastTimestamp:2026-02-26 08:12:48.553962412 +0000 UTC m=+3.549899789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.800702 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb62fb82dac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.575876524 +0000 UTC m=+3.571813911,LastTimestamp:2026-02-26 08:12:48.575876524 +0000 UTC m=+3.571813911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.808873 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb62fc9a731 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.577021745 +0000 UTC m=+3.572959132,LastTimestamp:2026-02-26 08:12:48.577021745 +0000 UTC m=+3.572959132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.814892 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897bdb62fd6e014 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.577888276 +0000 UTC m=+3.573825673,LastTimestamp:2026-02-26 08:12:48.577888276 +0000 UTC m=+3.573825673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.821695 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb63db8688c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.81077262 +0000 UTC m=+3.806710027,LastTimestamp:2026-02-26 08:12:48.81077262 +0000 UTC m=+3.806710027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.828701 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb63ecfe96d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.829090157 +0000 UTC m=+3.825027564,LastTimestamp:2026-02-26 08:12:48.829090157 +0000 UTC m=+3.825027564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.836617 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb63eefc434 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.83117778 +0000 UTC m=+3.827115177,LastTimestamp:2026-02-26 08:12:48.83117778 +0000 UTC m=+3.827115177,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.845838 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb64013fedb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.850329307 +0000 UTC m=+3.846266704,LastTimestamp:2026-02-26 08:12:48.850329307 +0000 UTC m=+3.846266704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.857855 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb64c57cde3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:49.056099811 +0000 UTC m=+4.052037198,LastTimestamp:2026-02-26 08:12:49.056099811 +0000 UTC m=+4.052037198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.859968 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb64c6dae08 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:49.057533448 +0000 UTC m=+4.053470835,LastTimestamp:2026-02-26 08:12:49.057533448 +0000 UTC 
m=+4.053470835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.864299 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb64d3405e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:49.070532069 +0000 UTC m=+4.066469466,LastTimestamp:2026-02-26 08:12:49.070532069 +0000 UTC m=+4.066469466,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.869966 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb64d3c42e1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:49.071071969 +0000 UTC m=+4.067009366,LastTimestamp:2026-02-26 
08:12:49.071071969 +0000 UTC m=+4.067009366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.883240 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb67cd46d4f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:49.869573455 +0000 UTC m=+4.865510872,LastTimestamp:2026-02-26 08:12:49.869573455 +0000 UTC m=+4.865510872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.890080 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb68c5409d5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:50.129594837 +0000 UTC 
m=+5.125532254,LastTimestamp:2026-02-26 08:12:50.129594837 +0000 UTC m=+5.125532254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.896587 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb68d159302 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:50.142278402 +0000 UTC m=+5.138215829,LastTimestamp:2026-02-26 08:12:50.142278402 +0000 UTC m=+5.138215829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.902474 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb68d2e8eca openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:50.143915722 +0000 UTC m=+5.139853139,LastTimestamp:2026-02-26 08:12:50.143915722 +0000 UTC m=+5.139853139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.907512 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb69d61ad22 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:50.415701282 +0000 UTC m=+5.411638669,LastTimestamp:2026-02-26 08:12:50.415701282 +0000 UTC m=+5.411638669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.912750 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb69e7a1f29 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:50.434080553 +0000 UTC 
m=+5.430017940,LastTimestamp:2026-02-26 08:12:50.434080553 +0000 UTC m=+5.430017940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.919486 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb69e986f94 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:50.43606722 +0000 UTC m=+5.432004597,LastTimestamp:2026-02-26 08:12:50.43606722 +0000 UTC m=+5.432004597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.925841 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb6adbe03b4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:50.690188212 +0000 UTC m=+5.686125639,LastTimestamp:2026-02-26 08:12:50.690188212 +0000 UTC m=+5.686125639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.932951 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb6aeb80f70 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:50.706575216 +0000 UTC m=+5.702512643,LastTimestamp:2026-02-26 08:12:50.706575216 +0000 UTC m=+5.702512643,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.939532 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb6aecee720 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:50.708072224 +0000 UTC m=+5.704009651,LastTimestamp:2026-02-26 08:12:50.708072224 +0000 UTC m=+5.704009651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.945488 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb6bdfd0897 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:50.962753687 +0000 UTC m=+5.958691114,LastTimestamp:2026-02-26 08:12:50.962753687 +0000 UTC m=+5.958691114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.949803 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb6bef7c4cf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:50.979185871 +0000 UTC m=+5.975123298,LastTimestamp:2026-02-26 08:12:50.979185871 +0000 UTC m=+5.975123298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.953712 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb6bf160af4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:50.981169908 +0000 UTC m=+5.977107335,LastTimestamp:2026-02-26 08:12:50.981169908 +0000 UTC m=+5.977107335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.958275 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb6ce0ca71b openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:51.232212763 +0000 UTC m=+6.228150190,LastTimestamp:2026-02-26 08:12:51.232212763 +0000 UTC m=+6.228150190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.961990 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897bdb6cf10e5c0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:51.24926816 +0000 UTC m=+6.245205587,LastTimestamp:2026-02-26 08:12:51.24926816 +0000 UTC m=+6.245205587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.967712 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 08:13:10 crc kubenswrapper[4741]: &Event{ObjectMeta:{kube-controller-manager-crc.1897bdb86529ebcc openshift-kube-controller-manager 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 26 08:13:10 crc kubenswrapper[4741]: body: Feb 26 08:13:10 crc kubenswrapper[4741]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:58.062457804 +0000 UTC m=+13.058395231,LastTimestamp:2026-02-26 08:12:58.062457804 +0000 UTC m=+13.058395231,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 08:13:10 crc kubenswrapper[4741]: > Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.971090 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb8652b9da3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:58.062568867 +0000 UTC m=+13.058506294,LastTimestamp:2026-02-26 08:12:58.062568867 +0000 UTC 
m=+13.058506294,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.975513 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 08:13:10 crc kubenswrapper[4741]: &Event{ObjectMeta:{kube-apiserver-crc.1897bdb8eafc5d5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 26 08:13:10 crc kubenswrapper[4741]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 08:13:10 crc kubenswrapper[4741]: Feb 26 08:13:10 crc kubenswrapper[4741]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:13:00.307619166 +0000 UTC m=+15.303556573,LastTimestamp:2026-02-26 08:13:00.307619166 +0000 UTC m=+15.303556573,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 08:13:10 crc kubenswrapper[4741]: > Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.981281 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb8eafd5d96 openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:13:00.307684758 +0000 UTC m=+15.303622175,LastTimestamp:2026-02-26 08:13:00.307684758 +0000 UTC m=+15.303622175,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.987472 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897bdb8eafc5d5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 08:13:10 crc kubenswrapper[4741]: &Event{ObjectMeta:{kube-apiserver-crc.1897bdb8eafc5d5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 26 08:13:10 crc kubenswrapper[4741]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 08:13:10 crc kubenswrapper[4741]: Feb 26 08:13:10 crc kubenswrapper[4741]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:13:00.307619166 +0000 UTC m=+15.303556573,LastTimestamp:2026-02-26 08:13:00.319633692 +0000 UTC 
m=+15.315571089,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 08:13:10 crc kubenswrapper[4741]: > Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.993256 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897bdb8eafd5d96\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb8eafd5d96 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:13:00.307684758 +0000 UTC m=+15.303622175,LastTimestamp:2026-02-26 08:13:00.319695524 +0000 UTC m=+15.315632921,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:10 crc kubenswrapper[4741]: E0226 08:13:10.999391 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 08:13:10 crc kubenswrapper[4741]: &Event{ObjectMeta:{kube-apiserver-crc.1897bdb8fce6475b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe 
error: HTTP probe failed with statuscode: 500 Feb 26 08:13:10 crc kubenswrapper[4741]: body: [+]ping ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]log ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]etcd ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/generic-apiserver-start-informers ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/priority-and-fairness-filter ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/start-apiextensions-informers ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/start-apiextensions-controllers ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/crd-informer-synced ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/start-system-namespaces-controller ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 26 08:13:10 crc 
kubenswrapper[4741]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 26 08:13:10 crc kubenswrapper[4741]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 26 08:13:10 crc kubenswrapper[4741]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/bootstrap-controller ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/start-kube-aggregator-informers ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/apiservice-registration-controller ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/apiservice-discovery-controller ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]autoregister-completion ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/apiservice-openapi-controller ok Feb 26 08:13:10 crc kubenswrapper[4741]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 26 08:13:10 crc kubenswrapper[4741]: livez check failed Feb 26 08:13:10 crc kubenswrapper[4741]: Feb 26 08:13:10 crc kubenswrapper[4741]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:13:00.608161627 +0000 UTC m=+15.604099024,LastTimestamp:2026-02-26 08:13:00.608161627 +0000 UTC m=+15.604099024,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 
26 08:13:10 crc kubenswrapper[4741]: > Feb 26 08:13:11 crc kubenswrapper[4741]: E0226 08:13:11.006338 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb8fce73f90 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:13:00.608225168 +0000 UTC m=+15.604162565,LastTimestamp:2026-02-26 08:13:00.608225168 +0000 UTC m=+15.604162565,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:11 crc kubenswrapper[4741]: E0226 08:13:11.011900 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897bdb63eefc434\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897bdb63eefc434 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:48.83117778 +0000 UTC m=+3.827115177,LastTimestamp:2026-02-26 08:13:00.924276275 +0000 UTC m=+15.920213702,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:11 crc kubenswrapper[4741]: E0226 08:13:11.020841 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 08:13:11 crc kubenswrapper[4741]: &Event{ObjectMeta:{kube-controller-manager-crc.1897bdbab943994f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 08:13:11 crc kubenswrapper[4741]: body: Feb 26 08:13:11 crc kubenswrapper[4741]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:13:08.063361359 +0000 UTC m=+23.059298786,LastTimestamp:2026-02-26 08:13:08.063361359 +0000 UTC m=+23.059298786,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 08:13:11 crc kubenswrapper[4741]: > Feb 26 08:13:11 crc kubenswrapper[4741]: E0226 08:13:11.027934 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdbab944c53d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:13:08.063438141 +0000 UTC m=+23.059375568,LastTimestamp:2026-02-26 08:13:08.063438141 +0000 UTC m=+23.059375568,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:11 crc kubenswrapper[4741]: W0226 08:13:11.303488 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 26 08:13:11 crc kubenswrapper[4741]: E0226 08:13:11.303678 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 26 08:13:11 crc kubenswrapper[4741]: I0226 08:13:11.702806 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 
08:13:12 crc kubenswrapper[4741]: W0226 08:13:12.238572 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:12 crc kubenswrapper[4741]: E0226 08:13:12.239018 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 26 08:13:12 crc kubenswrapper[4741]: W0226 08:13:12.330775 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 26 08:13:12 crc kubenswrapper[4741]: E0226 08:13:12.330865 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 26 08:13:12 crc kubenswrapper[4741]: I0226 08:13:12.695811 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:13 crc kubenswrapper[4741]: I0226 08:13:13.700744 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:13 crc kubenswrapper[4741]: I0226 08:13:13.725510 4741 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:13 crc kubenswrapper[4741]: E0226 08:13:13.727658 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 08:13:13 crc kubenswrapper[4741]: I0226 08:13:13.727731 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:13 crc kubenswrapper[4741]: I0226 08:13:13.727794 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:13 crc kubenswrapper[4741]: I0226 08:13:13.727813 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:13 crc kubenswrapper[4741]: I0226 08:13:13.727855 4741 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:13:13 crc kubenswrapper[4741]: E0226 08:13:13.735057 4741 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 08:13:14 crc kubenswrapper[4741]: I0226 08:13:14.700718 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:15 crc kubenswrapper[4741]: I0226 08:13:15.701569 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:15 crc kubenswrapper[4741]: 
E0226 08:13:15.872652 4741 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 08:13:16 crc kubenswrapper[4741]: I0226 08:13:16.702982 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:17 crc kubenswrapper[4741]: I0226 08:13:17.700089 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.062956 4741 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.063089 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.063225 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.063454 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.066425 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.066523 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.066548 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.067374 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.067644 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da" gracePeriod=30 Feb 26 08:13:18 crc kubenswrapper[4741]: E0226 08:13:18.075147 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897bdbab943994f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 08:13:18 crc kubenswrapper[4741]: &Event{ObjectMeta:{kube-controller-manager-crc.1897bdbab943994f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 08:13:18 crc kubenswrapper[4741]: body: Feb 26 08:13:18 crc kubenswrapper[4741]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:13:08.063361359 +0000 UTC m=+23.059298786,LastTimestamp:2026-02-26 08:13:18.063041079 +0000 UTC m=+33.058978496,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 08:13:18 crc kubenswrapper[4741]: > Feb 26 08:13:18 crc kubenswrapper[4741]: E0226 08:13:18.083721 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897bdbab944c53d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdbab944c53d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:13:08.063438141 +0000 UTC m=+23.059375568,LastTimestamp:2026-02-26 08:13:18.063176463 
+0000 UTC m=+33.059113890,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:18 crc kubenswrapper[4741]: E0226 08:13:18.093874 4741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdbd0d906591 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:13:18.067615121 +0000 UTC m=+33.063552548,LastTimestamp:2026-02-26 08:13:18.067615121 +0000 UTC m=+33.063552548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:18 crc kubenswrapper[4741]: E0226 08:13:18.201677 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897bdb5d22be79f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb5d22be79f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.006402463 +0000 UTC m=+2.002339890,LastTimestamp:2026-02-26 08:13:18.194153163 +0000 UTC m=+33.190090580,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:18 crc kubenswrapper[4741]: E0226 08:13:18.432509 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897bdb5e9dd5239\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb5e9dd5239 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.403905593 +0000 UTC m=+2.399843020,LastTimestamp:2026-02-26 08:13:18.430372292 +0000 UTC m=+33.426309709,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:18 crc kubenswrapper[4741]: E0226 08:13:18.448833 4741 event.go:359] "Server rejected event 
(will not retry!)" err="events \"kube-controller-manager-crc.1897bdb5eadf6c61\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdb5eadf6c61 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:12:47.420820577 +0000 UTC m=+2.416757964,LastTimestamp:2026-02-26 08:13:18.446648921 +0000 UTC m=+33.442586348,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.701331 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.997803 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.998496 4741 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da" exitCode=255 Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.998569 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da"} Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.998618 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f37351e10c025a82957f2d8c27749be49476cdcd05f1f6619723c72099dd01dc"} Feb 26 08:13:18 crc kubenswrapper[4741]: I0226 08:13:18.998779 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:19 crc kubenswrapper[4741]: I0226 08:13:19.000021 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:19 crc kubenswrapper[4741]: I0226 08:13:19.000088 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:19 crc kubenswrapper[4741]: I0226 08:13:19.000144 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:19 crc kubenswrapper[4741]: I0226 08:13:19.701168 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:20 crc kubenswrapper[4741]: I0226 08:13:20.702519 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:20 crc kubenswrapper[4741]: E0226 08:13:20.734298 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io 
\"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 08:13:20 crc kubenswrapper[4741]: I0226 08:13:20.735234 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:20 crc kubenswrapper[4741]: I0226 08:13:20.737096 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:20 crc kubenswrapper[4741]: I0226 08:13:20.737215 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:20 crc kubenswrapper[4741]: I0226 08:13:20.737244 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:20 crc kubenswrapper[4741]: I0226 08:13:20.737295 4741 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:13:20 crc kubenswrapper[4741]: E0226 08:13:20.745278 4741 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 08:13:20 crc kubenswrapper[4741]: I0226 08:13:20.787051 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:20 crc kubenswrapper[4741]: I0226 08:13:20.789506 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:20 crc kubenswrapper[4741]: I0226 08:13:20.789586 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:20 crc kubenswrapper[4741]: I0226 08:13:20.789606 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:20 crc kubenswrapper[4741]: I0226 
08:13:20.790825 4741 scope.go:117] "RemoveContainer" containerID="3a6c33815404903d354313bc6d721c5e93b892877f87b618d10912189527b6db" Feb 26 08:13:21 crc kubenswrapper[4741]: I0226 08:13:21.695292 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:22 crc kubenswrapper[4741]: I0226 08:13:22.011884 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 08:13:22 crc kubenswrapper[4741]: I0226 08:13:22.012873 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 08:13:22 crc kubenswrapper[4741]: I0226 08:13:22.015700 4741 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c9275d180ce5f3d9499e3175dacf2fd63a4ceea8a481271da0fc49b6831a0fa1" exitCode=255 Feb 26 08:13:22 crc kubenswrapper[4741]: I0226 08:13:22.015761 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c9275d180ce5f3d9499e3175dacf2fd63a4ceea8a481271da0fc49b6831a0fa1"} Feb 26 08:13:22 crc kubenswrapper[4741]: I0226 08:13:22.015817 4741 scope.go:117] "RemoveContainer" containerID="3a6c33815404903d354313bc6d721c5e93b892877f87b618d10912189527b6db" Feb 26 08:13:22 crc kubenswrapper[4741]: I0226 08:13:22.016045 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:22 crc kubenswrapper[4741]: I0226 08:13:22.017655 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 26 08:13:22 crc kubenswrapper[4741]: I0226 08:13:22.017696 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:22 crc kubenswrapper[4741]: I0226 08:13:22.017708 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:22 crc kubenswrapper[4741]: I0226 08:13:22.019856 4741 scope.go:117] "RemoveContainer" containerID="c9275d180ce5f3d9499e3175dacf2fd63a4ceea8a481271da0fc49b6831a0fa1" Feb 26 08:13:22 crc kubenswrapper[4741]: E0226 08:13:22.020228 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 08:13:22 crc kubenswrapper[4741]: I0226 08:13:22.702139 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:23 crc kubenswrapper[4741]: I0226 08:13:23.022188 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 08:13:23 crc kubenswrapper[4741]: I0226 08:13:23.298948 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:13:23 crc kubenswrapper[4741]: I0226 08:13:23.299244 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:23 crc kubenswrapper[4741]: I0226 08:13:23.301671 4741 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:23 crc kubenswrapper[4741]: I0226 08:13:23.301734 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:23 crc kubenswrapper[4741]: I0226 08:13:23.301756 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:23 crc kubenswrapper[4741]: I0226 08:13:23.302568 4741 scope.go:117] "RemoveContainer" containerID="c9275d180ce5f3d9499e3175dacf2fd63a4ceea8a481271da0fc49b6831a0fa1" Feb 26 08:13:23 crc kubenswrapper[4741]: E0226 08:13:23.302856 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 08:13:23 crc kubenswrapper[4741]: I0226 08:13:23.699561 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:23 crc kubenswrapper[4741]: I0226 08:13:23.712927 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:13:24 crc kubenswrapper[4741]: I0226 08:13:24.029068 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:24 crc kubenswrapper[4741]: I0226 08:13:24.030621 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:24 crc kubenswrapper[4741]: I0226 08:13:24.030717 4741 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:24 crc kubenswrapper[4741]: I0226 08:13:24.030738 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:24 crc kubenswrapper[4741]: I0226 08:13:24.031894 4741 scope.go:117] "RemoveContainer" containerID="c9275d180ce5f3d9499e3175dacf2fd63a4ceea8a481271da0fc49b6831a0fa1" Feb 26 08:13:24 crc kubenswrapper[4741]: E0226 08:13:24.032309 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 08:13:24 crc kubenswrapper[4741]: I0226 08:13:24.093290 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:13:24 crc kubenswrapper[4741]: I0226 08:13:24.093527 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:24 crc kubenswrapper[4741]: I0226 08:13:24.095219 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:24 crc kubenswrapper[4741]: I0226 08:13:24.095268 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:24 crc kubenswrapper[4741]: I0226 08:13:24.095277 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:24 crc kubenswrapper[4741]: I0226 08:13:24.701868 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: 
User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:25 crc kubenswrapper[4741]: I0226 08:13:25.062562 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:13:25 crc kubenswrapper[4741]: I0226 08:13:25.063591 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:25 crc kubenswrapper[4741]: I0226 08:13:25.065370 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:25 crc kubenswrapper[4741]: I0226 08:13:25.065446 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:25 crc kubenswrapper[4741]: I0226 08:13:25.065463 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:25 crc kubenswrapper[4741]: I0226 08:13:25.699335 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:25 crc kubenswrapper[4741]: E0226 08:13:25.872842 4741 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 08:13:26 crc kubenswrapper[4741]: I0226 08:13:26.701821 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:27 crc kubenswrapper[4741]: W0226 08:13:27.029644 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: 
User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 26 08:13:27 crc kubenswrapper[4741]: E0226 08:13:27.029727 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 26 08:13:27 crc kubenswrapper[4741]: I0226 08:13:27.701579 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:27 crc kubenswrapper[4741]: E0226 08:13:27.742270 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 08:13:27 crc kubenswrapper[4741]: I0226 08:13:27.746372 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:27 crc kubenswrapper[4741]: I0226 08:13:27.748057 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:27 crc kubenswrapper[4741]: I0226 08:13:27.748174 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:27 crc kubenswrapper[4741]: I0226 08:13:27.748201 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:27 crc kubenswrapper[4741]: I0226 08:13:27.748250 4741 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:13:27 crc 
kubenswrapper[4741]: E0226 08:13:27.755788 4741 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 08:13:28 crc kubenswrapper[4741]: I0226 08:13:28.063564 4741 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 08:13:28 crc kubenswrapper[4741]: I0226 08:13:28.063678 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 08:13:28 crc kubenswrapper[4741]: E0226 08:13:28.071971 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897bdbab943994f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 08:13:28 crc kubenswrapper[4741]: &Event{ObjectMeta:{kube-controller-manager-crc.1897bdbab943994f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 08:13:28 crc kubenswrapper[4741]: body: Feb 26 08:13:28 crc kubenswrapper[4741]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:13:08.063361359 +0000 UTC m=+23.059298786,LastTimestamp:2026-02-26 08:13:28.063645927 +0000 UTC m=+43.059583354,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 08:13:28 crc kubenswrapper[4741]: > Feb 26 08:13:28 crc kubenswrapper[4741]: E0226 08:13:28.079757 4741 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897bdbab944c53d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897bdbab944c53d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:13:08.063438141 +0000 UTC m=+23.059375568,LastTimestamp:2026-02-26 08:13:28.063722549 +0000 UTC m=+43.059659976,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:13:28 crc kubenswrapper[4741]: I0226 08:13:28.702368 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
"crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:29 crc kubenswrapper[4741]: I0226 08:13:29.701608 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:30 crc kubenswrapper[4741]: W0226 08:13:30.511648 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 26 08:13:30 crc kubenswrapper[4741]: E0226 08:13:30.512181 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 26 08:13:30 crc kubenswrapper[4741]: I0226 08:13:30.703099 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:31 crc kubenswrapper[4741]: I0226 08:13:31.699834 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:32 crc kubenswrapper[4741]: I0226 08:13:32.698805 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:33 crc 
kubenswrapper[4741]: W0226 08:13:33.672166 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:33 crc kubenswrapper[4741]: E0226 08:13:33.672278 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 26 08:13:33 crc kubenswrapper[4741]: I0226 08:13:33.699863 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:33 crc kubenswrapper[4741]: I0226 08:13:33.950999 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 08:13:33 crc kubenswrapper[4741]: I0226 08:13:33.951322 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:33 crc kubenswrapper[4741]: I0226 08:13:33.953182 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:33 crc kubenswrapper[4741]: I0226 08:13:33.953268 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:33 crc kubenswrapper[4741]: I0226 08:13:33.953312 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:34 crc kubenswrapper[4741]: I0226 08:13:34.695895 4741 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:34 crc kubenswrapper[4741]: E0226 08:13:34.748599 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 08:13:34 crc kubenswrapper[4741]: I0226 08:13:34.756670 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:34 crc kubenswrapper[4741]: I0226 08:13:34.758439 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:34 crc kubenswrapper[4741]: I0226 08:13:34.758507 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:34 crc kubenswrapper[4741]: I0226 08:13:34.758529 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:34 crc kubenswrapper[4741]: I0226 08:13:34.758599 4741 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:13:34 crc kubenswrapper[4741]: E0226 08:13:34.772737 4741 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 08:13:34 crc kubenswrapper[4741]: I0226 08:13:34.786427 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:34 crc kubenswrapper[4741]: I0226 08:13:34.788276 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:34 crc kubenswrapper[4741]: I0226 08:13:34.788345 4741 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:34 crc kubenswrapper[4741]: I0226 08:13:34.788364 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:34 crc kubenswrapper[4741]: I0226 08:13:34.789622 4741 scope.go:117] "RemoveContainer" containerID="c9275d180ce5f3d9499e3175dacf2fd63a4ceea8a481271da0fc49b6831a0fa1" Feb 26 08:13:34 crc kubenswrapper[4741]: E0226 08:13:34.789936 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 08:13:35 crc kubenswrapper[4741]: I0226 08:13:35.071843 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:13:35 crc kubenswrapper[4741]: I0226 08:13:35.072511 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:35 crc kubenswrapper[4741]: I0226 08:13:35.074170 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:35 crc kubenswrapper[4741]: I0226 08:13:35.074221 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:35 crc kubenswrapper[4741]: I0226 08:13:35.074235 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:35 crc kubenswrapper[4741]: I0226 08:13:35.078665 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:13:35 crc kubenswrapper[4741]: I0226 08:13:35.700882 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:35 crc kubenswrapper[4741]: E0226 08:13:35.873194 4741 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 08:13:36 crc kubenswrapper[4741]: I0226 08:13:36.067432 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:36 crc kubenswrapper[4741]: I0226 08:13:36.068991 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:36 crc kubenswrapper[4741]: I0226 08:13:36.069059 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:36 crc kubenswrapper[4741]: I0226 08:13:36.069082 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:36 crc kubenswrapper[4741]: I0226 08:13:36.701517 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:37 crc kubenswrapper[4741]: W0226 08:13:37.667866 4741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 26 08:13:37 crc kubenswrapper[4741]: E0226 08:13:37.667948 4741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: 
failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 26 08:13:37 crc kubenswrapper[4741]: I0226 08:13:37.700784 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:38 crc kubenswrapper[4741]: I0226 08:13:38.698909 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:39 crc kubenswrapper[4741]: I0226 08:13:39.700231 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:40 crc kubenswrapper[4741]: I0226 08:13:40.700754 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:41 crc kubenswrapper[4741]: I0226 08:13:41.700005 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:41 crc kubenswrapper[4741]: E0226 08:13:41.754308 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace 
\"kube-node-lease\"" interval="7s" Feb 26 08:13:41 crc kubenswrapper[4741]: I0226 08:13:41.774331 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:41 crc kubenswrapper[4741]: I0226 08:13:41.775572 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:41 crc kubenswrapper[4741]: I0226 08:13:41.775629 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:41 crc kubenswrapper[4741]: I0226 08:13:41.775642 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:41 crc kubenswrapper[4741]: I0226 08:13:41.775672 4741 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:13:41 crc kubenswrapper[4741]: E0226 08:13:41.781194 4741 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 08:13:42 crc kubenswrapper[4741]: I0226 08:13:42.698963 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:43 crc kubenswrapper[4741]: I0226 08:13:43.700677 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:44 crc kubenswrapper[4741]: I0226 08:13:44.700256 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Feb 26 08:13:45 crc kubenswrapper[4741]: I0226 08:13:45.700846 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:45 crc kubenswrapper[4741]: I0226 08:13:45.786903 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:45 crc kubenswrapper[4741]: I0226 08:13:45.788348 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:45 crc kubenswrapper[4741]: I0226 08:13:45.788391 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:45 crc kubenswrapper[4741]: I0226 08:13:45.788406 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:45 crc kubenswrapper[4741]: I0226 08:13:45.789263 4741 scope.go:117] "RemoveContainer" containerID="c9275d180ce5f3d9499e3175dacf2fd63a4ceea8a481271da0fc49b6831a0fa1" Feb 26 08:13:45 crc kubenswrapper[4741]: E0226 08:13:45.874460 4741 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 08:13:46 crc kubenswrapper[4741]: I0226 08:13:46.698631 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:47 crc kubenswrapper[4741]: I0226 08:13:47.101638 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 08:13:47 crc kubenswrapper[4741]: I0226 
08:13:47.103202 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 08:13:47 crc kubenswrapper[4741]: I0226 08:13:47.106003 4741 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6" exitCode=255 Feb 26 08:13:47 crc kubenswrapper[4741]: I0226 08:13:47.106069 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6"} Feb 26 08:13:47 crc kubenswrapper[4741]: I0226 08:13:47.106152 4741 scope.go:117] "RemoveContainer" containerID="c9275d180ce5f3d9499e3175dacf2fd63a4ceea8a481271da0fc49b6831a0fa1" Feb 26 08:13:47 crc kubenswrapper[4741]: I0226 08:13:47.106379 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:47 crc kubenswrapper[4741]: I0226 08:13:47.108251 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:47 crc kubenswrapper[4741]: I0226 08:13:47.108311 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:47 crc kubenswrapper[4741]: I0226 08:13:47.108330 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:47 crc kubenswrapper[4741]: I0226 08:13:47.109353 4741 scope.go:117] "RemoveContainer" containerID="bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6" Feb 26 08:13:47 crc kubenswrapper[4741]: E0226 08:13:47.109771 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with 
CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 08:13:47 crc kubenswrapper[4741]: I0226 08:13:47.698341 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:48 crc kubenswrapper[4741]: I0226 08:13:48.112325 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 08:13:48 crc kubenswrapper[4741]: I0226 08:13:48.700728 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:48 crc kubenswrapper[4741]: E0226 08:13:48.762170 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 08:13:48 crc kubenswrapper[4741]: I0226 08:13:48.782626 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:48 crc kubenswrapper[4741]: I0226 08:13:48.789605 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:48 crc kubenswrapper[4741]: I0226 08:13:48.789660 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:48 crc kubenswrapper[4741]: 
I0226 08:13:48.789674 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:48 crc kubenswrapper[4741]: I0226 08:13:48.789702 4741 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:13:48 crc kubenswrapper[4741]: E0226 08:13:48.796706 4741 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 08:13:49 crc kubenswrapper[4741]: I0226 08:13:49.700913 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:50 crc kubenswrapper[4741]: I0226 08:13:50.700040 4741 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 08:13:50 crc kubenswrapper[4741]: I0226 08:13:50.804654 4741 csr.go:261] certificate signing request csr-fjhzc is approved, waiting to be issued Feb 26 08:13:50 crc kubenswrapper[4741]: I0226 08:13:50.818103 4741 csr.go:257] certificate signing request csr-fjhzc is issued Feb 26 08:13:50 crc kubenswrapper[4741]: I0226 08:13:50.931028 4741 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 26 08:13:51 crc kubenswrapper[4741]: I0226 08:13:51.520302 4741 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 26 08:13:51 crc kubenswrapper[4741]: I0226 08:13:51.819508 4741 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-24 12:50:41.228675303 +0000 
UTC Feb 26 08:13:51 crc kubenswrapper[4741]: I0226 08:13:51.819564 4741 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7228h36m49.409115519s for next certificate rotation Feb 26 08:13:53 crc kubenswrapper[4741]: I0226 08:13:53.299423 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:13:53 crc kubenswrapper[4741]: I0226 08:13:53.299670 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:53 crc kubenswrapper[4741]: I0226 08:13:53.301130 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:53 crc kubenswrapper[4741]: I0226 08:13:53.301186 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:53 crc kubenswrapper[4741]: I0226 08:13:53.301199 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:53 crc kubenswrapper[4741]: I0226 08:13:53.302053 4741 scope.go:117] "RemoveContainer" containerID="bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6" Feb 26 08:13:53 crc kubenswrapper[4741]: E0226 08:13:53.302250 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 08:13:53 crc kubenswrapper[4741]: I0226 08:13:53.713284 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:13:54 crc kubenswrapper[4741]: I0226 08:13:54.157183 4741 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:54 crc kubenswrapper[4741]: I0226 08:13:54.158094 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:54 crc kubenswrapper[4741]: I0226 08:13:54.158148 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:54 crc kubenswrapper[4741]: I0226 08:13:54.158157 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:54 crc kubenswrapper[4741]: I0226 08:13:54.158713 4741 scope.go:117] "RemoveContainer" containerID="bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6" Feb 26 08:13:54 crc kubenswrapper[4741]: E0226 08:13:54.158871 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.797201 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.798798 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.798864 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.798883 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.799044 4741 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.812290 4741 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.812651 4741 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 26 08:13:55 crc kubenswrapper[4741]: E0226 08:13:55.812685 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.817907 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.817968 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.817989 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.818013 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.818041 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:13:55Z","lastTransitionTime":"2026-02-26T08:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:13:55 crc kubenswrapper[4741]: E0226 08:13:55.844607 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.856048 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.856095 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.856145 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.856174 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.856196 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:13:55Z","lastTransitionTime":"2026-02-26T08:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:13:55 crc kubenswrapper[4741]: E0226 08:13:55.869716 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:13:55 crc kubenswrapper[4741]: E0226 08:13:55.874860 4741 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.881198 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.881238 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.881256 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.881276 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.881296 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:13:55Z","lastTransitionTime":"2026-02-26T08:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:13:55 crc kubenswrapper[4741]: E0226 08:13:55.899451 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.909833 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.909893 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.909908 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.909928 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:13:55 crc kubenswrapper[4741]: I0226 08:13:55.909944 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:13:55Z","lastTransitionTime":"2026-02-26T08:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:13:55 crc kubenswrapper[4741]: E0226 08:13:55.926302 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:13:55 crc kubenswrapper[4741]: E0226 08:13:55.926561 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 08:13:55 crc kubenswrapper[4741]: E0226 08:13:55.926600 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:56 crc kubenswrapper[4741]: E0226 08:13:56.027570 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:56 crc kubenswrapper[4741]: E0226 08:13:56.127839 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:56 crc kubenswrapper[4741]: E0226 08:13:56.228773 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:56 crc kubenswrapper[4741]: E0226 08:13:56.328853 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:56 crc kubenswrapper[4741]: E0226 08:13:56.429199 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:56 crc kubenswrapper[4741]: E0226 08:13:56.529368 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:56 crc kubenswrapper[4741]: E0226 08:13:56.629592 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:56 crc kubenswrapper[4741]: E0226 08:13:56.730339 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:56 crc kubenswrapper[4741]: E0226 08:13:56.830941 4741 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:56 crc kubenswrapper[4741]: E0226 08:13:56.932232 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:57 crc kubenswrapper[4741]: E0226 08:13:57.033039 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:57 crc kubenswrapper[4741]: E0226 08:13:57.133909 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:57 crc kubenswrapper[4741]: E0226 08:13:57.234175 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:57 crc kubenswrapper[4741]: E0226 08:13:57.335197 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:57 crc kubenswrapper[4741]: E0226 08:13:57.436324 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:57 crc kubenswrapper[4741]: E0226 08:13:57.537451 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:57 crc kubenswrapper[4741]: E0226 08:13:57.638392 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:57 crc kubenswrapper[4741]: E0226 08:13:57.739337 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:57 crc kubenswrapper[4741]: E0226 08:13:57.839767 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:57 crc kubenswrapper[4741]: E0226 08:13:57.940920 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:13:58 crc 
kubenswrapper[4741]: E0226 08:13:58.041051 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:58 crc kubenswrapper[4741]: E0226 08:13:58.141945 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:58 crc kubenswrapper[4741]: E0226 08:13:58.242697 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:58 crc kubenswrapper[4741]: E0226 08:13:58.343573 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:58 crc kubenswrapper[4741]: E0226 08:13:58.443702 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:58 crc kubenswrapper[4741]: I0226 08:13:58.494261 4741 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 26 08:13:58 crc kubenswrapper[4741]: E0226 08:13:58.544588 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:58 crc kubenswrapper[4741]: E0226 08:13:58.645784 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:58 crc kubenswrapper[4741]: E0226 08:13:58.746143 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:58 crc kubenswrapper[4741]: I0226 08:13:58.786838 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 08:13:58 crc kubenswrapper[4741]: I0226 08:13:58.788469 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 08:13:58 crc kubenswrapper[4741]: I0226 08:13:58.788559 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 08:13:58 crc kubenswrapper[4741]: I0226 08:13:58.788583 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 08:13:58 crc kubenswrapper[4741]: E0226 08:13:58.846657 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:58 crc kubenswrapper[4741]: E0226 08:13:58.947336 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:59 crc kubenswrapper[4741]: E0226 08:13:59.048180 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:59 crc kubenswrapper[4741]: E0226 08:13:59.148640 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:59 crc kubenswrapper[4741]: E0226 08:13:59.249791 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:59 crc kubenswrapper[4741]: E0226 08:13:59.350949 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:59 crc kubenswrapper[4741]: E0226 08:13:59.451921 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:59 crc kubenswrapper[4741]: E0226 08:13:59.552054 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:59 crc kubenswrapper[4741]: E0226 08:13:59.652633 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:59 crc kubenswrapper[4741]: E0226 08:13:59.752966 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:59 crc kubenswrapper[4741]: E0226 08:13:59.853680 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:13:59 crc kubenswrapper[4741]: E0226 08:13:59.954268 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:00 crc kubenswrapper[4741]: E0226 08:14:00.054733 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:00 crc kubenswrapper[4741]: E0226 08:14:00.155135 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:00 crc kubenswrapper[4741]: E0226 08:14:00.255787 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:00 crc kubenswrapper[4741]: E0226 08:14:00.367139 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:00 crc kubenswrapper[4741]: E0226 08:14:00.467660 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:00 crc kubenswrapper[4741]: E0226 08:14:00.568332 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:00 crc kubenswrapper[4741]: E0226 08:14:00.669499 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:00 crc kubenswrapper[4741]: E0226 08:14:00.770425 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:00 crc kubenswrapper[4741]: E0226 08:14:00.870998 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:00 crc kubenswrapper[4741]: E0226 08:14:00.971956 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:01 crc kubenswrapper[4741]: E0226 08:14:01.072590 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:01 crc kubenswrapper[4741]: E0226 08:14:01.173952 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:01 crc kubenswrapper[4741]: E0226 08:14:01.274727 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:01 crc kubenswrapper[4741]: E0226 08:14:01.375372 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:01 crc kubenswrapper[4741]: E0226 08:14:01.476041 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:01 crc kubenswrapper[4741]: E0226 08:14:01.576619 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:01 crc kubenswrapper[4741]: E0226 08:14:01.677668 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:01 crc kubenswrapper[4741]: E0226 08:14:01.778187 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:01 crc kubenswrapper[4741]: E0226 08:14:01.878649 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:01 crc kubenswrapper[4741]: E0226 08:14:01.979826 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:02 crc kubenswrapper[4741]: E0226 08:14:02.080242 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:02 crc kubenswrapper[4741]: E0226 08:14:02.180346 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:02 crc kubenswrapper[4741]: E0226 08:14:02.281094 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:02 crc kubenswrapper[4741]: E0226 08:14:02.381960 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:02 crc kubenswrapper[4741]: E0226 08:14:02.483044 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:02 crc kubenswrapper[4741]: E0226 08:14:02.583516 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:02 crc kubenswrapper[4741]: E0226 08:14:02.684287 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:02 crc kubenswrapper[4741]: E0226 08:14:02.785263 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:02 crc kubenswrapper[4741]: E0226 08:14:02.886424 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:02 crc kubenswrapper[4741]: E0226 08:14:02.986763 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:03 crc kubenswrapper[4741]: E0226 08:14:03.086991 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:03 crc kubenswrapper[4741]: E0226 08:14:03.188062 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:03 crc kubenswrapper[4741]: E0226 08:14:03.288551 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:03 crc kubenswrapper[4741]: E0226 08:14:03.389710 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:03 crc kubenswrapper[4741]: E0226 08:14:03.490843 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:03 crc kubenswrapper[4741]: E0226 08:14:03.591331 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:03 crc kubenswrapper[4741]: E0226 08:14:03.691976 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:03 crc kubenswrapper[4741]: E0226 08:14:03.793128 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:03 crc kubenswrapper[4741]: E0226 08:14:03.893831 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:03 crc kubenswrapper[4741]: E0226 08:14:03.994913 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:04 crc kubenswrapper[4741]: E0226 08:14:04.095087 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:04 crc kubenswrapper[4741]: E0226 08:14:04.195821 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:04 crc kubenswrapper[4741]: E0226 08:14:04.296730 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:04 crc kubenswrapper[4741]: E0226 08:14:04.397777 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:04 crc kubenswrapper[4741]: E0226 08:14:04.498569 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:04 crc kubenswrapper[4741]: E0226 08:14:04.599008 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:04 crc kubenswrapper[4741]: E0226 08:14:04.699564 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:04 crc kubenswrapper[4741]: E0226 08:14:04.799943 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:04 crc kubenswrapper[4741]: E0226 08:14:04.900065 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:05 crc kubenswrapper[4741]: E0226 08:14:05.001251 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:05 crc kubenswrapper[4741]: E0226 08:14:05.101379 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:05 crc kubenswrapper[4741]: E0226 08:14:05.201499 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:05 crc kubenswrapper[4741]: E0226 08:14:05.302411 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:05 crc kubenswrapper[4741]: E0226 08:14:05.402644 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:05 crc kubenswrapper[4741]: E0226 08:14:05.502953 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:05 crc kubenswrapper[4741]: E0226 08:14:05.603785 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:05 crc kubenswrapper[4741]: E0226 08:14:05.704481 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:05 crc
kubenswrapper[4741]: I0226 08:14:05.786741 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 08:14:05 crc kubenswrapper[4741]: I0226 08:14:05.788457 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 08:14:05 crc kubenswrapper[4741]: I0226 08:14:05.788516 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 08:14:05 crc kubenswrapper[4741]: I0226 08:14:05.788539 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 08:14:05 crc kubenswrapper[4741]: I0226 08:14:05.789669 4741 scope.go:117] "RemoveContainer" containerID="bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6"
Feb 26 08:14:05 crc kubenswrapper[4741]: E0226 08:14:05.789964 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 26 08:14:05 crc kubenswrapper[4741]: E0226 08:14:05.805069 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:05 crc kubenswrapper[4741]: E0226 08:14:05.875557 4741 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 26 08:14:05 crc kubenswrapper[4741]: E0226 08:14:05.906012 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.006555 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.063807 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.069031 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.069087 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.069105 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.069165 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.069185 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:06Z","lastTransitionTime":"2026-02-26T08:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.084899 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.089837 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.089903 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.089926 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.089953 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.089970 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:06Z","lastTransitionTime":"2026-02-26T08:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.104966 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.109976 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.110027 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.110047 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.110072 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.110092 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:06Z","lastTransitionTime":"2026-02-26T08:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.125805 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.133267 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.133389 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.133425 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.133451 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:06 crc kubenswrapper[4741]: I0226 08:14:06.133475 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:06Z","lastTransitionTime":"2026-02-26T08:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.155037 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.155397 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.155476 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.256189 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.356616 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.457624 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.557944 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.658506 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.759439 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.860039 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:06 crc kubenswrapper[4741]: E0226 08:14:06.960225 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:07 crc kubenswrapper[4741]: E0226 08:14:07.060793 4741 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:07 crc kubenswrapper[4741]: E0226 08:14:07.162029 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:07 crc kubenswrapper[4741]: E0226 08:14:07.263223 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:07 crc kubenswrapper[4741]: E0226 08:14:07.363470 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:07 crc kubenswrapper[4741]: E0226 08:14:07.464216 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:07 crc kubenswrapper[4741]: E0226 08:14:07.564885 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:07 crc kubenswrapper[4741]: E0226 08:14:07.665982 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:07 crc kubenswrapper[4741]: E0226 08:14:07.766640 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:07 crc kubenswrapper[4741]: E0226 08:14:07.867329 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:07 crc kubenswrapper[4741]: E0226 08:14:07.967826 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:08 crc kubenswrapper[4741]: E0226 08:14:08.068523 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:08 crc kubenswrapper[4741]: E0226 08:14:08.169376 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:08 crc 
kubenswrapper[4741]: E0226 08:14:08.270177 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:08 crc kubenswrapper[4741]: E0226 08:14:08.370306 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:08 crc kubenswrapper[4741]: E0226 08:14:08.471215 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:08 crc kubenswrapper[4741]: E0226 08:14:08.572184 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:08 crc kubenswrapper[4741]: E0226 08:14:08.672277 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:08 crc kubenswrapper[4741]: E0226 08:14:08.773411 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:08 crc kubenswrapper[4741]: E0226 08:14:08.874441 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:08 crc kubenswrapper[4741]: E0226 08:14:08.975003 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:09 crc kubenswrapper[4741]: E0226 08:14:09.075697 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:09 crc kubenswrapper[4741]: I0226 08:14:09.145241 4741 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 26 08:14:09 crc kubenswrapper[4741]: E0226 08:14:09.176344 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:09 crc kubenswrapper[4741]: E0226 08:14:09.277198 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 26 08:14:09 crc kubenswrapper[4741]: E0226 08:14:09.378147 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:09 crc kubenswrapper[4741]: E0226 08:14:09.479146 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:09 crc kubenswrapper[4741]: E0226 08:14:09.579526 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:09 crc kubenswrapper[4741]: E0226 08:14:09.680667 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:09 crc kubenswrapper[4741]: E0226 08:14:09.781479 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:09 crc kubenswrapper[4741]: E0226 08:14:09.881712 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:09 crc kubenswrapper[4741]: E0226 08:14:09.982236 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:10 crc kubenswrapper[4741]: E0226 08:14:10.082762 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:10 crc kubenswrapper[4741]: E0226 08:14:10.183684 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:10 crc kubenswrapper[4741]: E0226 08:14:10.283991 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:10 crc kubenswrapper[4741]: E0226 08:14:10.385218 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:10 crc kubenswrapper[4741]: E0226 08:14:10.486383 4741 kubelet_node_status.go:503] "Error getting 
the current node from lister" err="node \"crc\" not found" Feb 26 08:14:10 crc kubenswrapper[4741]: E0226 08:14:10.587494 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:10 crc kubenswrapper[4741]: E0226 08:14:10.687782 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:10 crc kubenswrapper[4741]: E0226 08:14:10.788747 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:10 crc kubenswrapper[4741]: E0226 08:14:10.889627 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:10 crc kubenswrapper[4741]: E0226 08:14:10.990736 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:11 crc kubenswrapper[4741]: E0226 08:14:11.091822 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:11 crc kubenswrapper[4741]: E0226 08:14:11.193016 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:11 crc kubenswrapper[4741]: E0226 08:14:11.294200 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:11 crc kubenswrapper[4741]: E0226 08:14:11.394966 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:11 crc kubenswrapper[4741]: E0226 08:14:11.496008 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:11 crc kubenswrapper[4741]: E0226 08:14:11.597281 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:11 crc kubenswrapper[4741]: E0226 08:14:11.697422 4741 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:11 crc kubenswrapper[4741]: E0226 08:14:11.798497 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:11 crc kubenswrapper[4741]: E0226 08:14:11.899247 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:11 crc kubenswrapper[4741]: E0226 08:14:11.999397 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:12 crc kubenswrapper[4741]: E0226 08:14:12.099519 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:12 crc kubenswrapper[4741]: E0226 08:14:12.200041 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:12 crc kubenswrapper[4741]: E0226 08:14:12.300349 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:12 crc kubenswrapper[4741]: E0226 08:14:12.401042 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:12 crc kubenswrapper[4741]: E0226 08:14:12.501884 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:12 crc kubenswrapper[4741]: E0226 08:14:12.602150 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:12 crc kubenswrapper[4741]: E0226 08:14:12.702314 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:12 crc kubenswrapper[4741]: E0226 08:14:12.803277 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:12 crc 
kubenswrapper[4741]: E0226 08:14:12.904018 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:13 crc kubenswrapper[4741]: E0226 08:14:13.004760 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:13 crc kubenswrapper[4741]: E0226 08:14:13.105205 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:13 crc kubenswrapper[4741]: E0226 08:14:13.205567 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:13 crc kubenswrapper[4741]: E0226 08:14:13.306132 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:13 crc kubenswrapper[4741]: E0226 08:14:13.406757 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:13 crc kubenswrapper[4741]: E0226 08:14:13.507462 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:13 crc kubenswrapper[4741]: E0226 08:14:13.607762 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:13 crc kubenswrapper[4741]: E0226 08:14:13.708257 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:13 crc kubenswrapper[4741]: I0226 08:14:13.786758 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:14:13 crc kubenswrapper[4741]: I0226 08:14:13.788597 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:13 crc kubenswrapper[4741]: I0226 08:14:13.788646 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 08:14:13 crc kubenswrapper[4741]: I0226 08:14:13.788660 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:13 crc kubenswrapper[4741]: E0226 08:14:13.809323 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:13 crc kubenswrapper[4741]: E0226 08:14:13.910569 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:14 crc kubenswrapper[4741]: E0226 08:14:14.011084 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:14 crc kubenswrapper[4741]: E0226 08:14:14.112843 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:14 crc kubenswrapper[4741]: E0226 08:14:14.213387 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:14 crc kubenswrapper[4741]: E0226 08:14:14.313929 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:14 crc kubenswrapper[4741]: E0226 08:14:14.414343 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:14 crc kubenswrapper[4741]: E0226 08:14:14.515389 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:14 crc kubenswrapper[4741]: E0226 08:14:14.616129 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:14 crc kubenswrapper[4741]: E0226 08:14:14.716713 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:14 crc kubenswrapper[4741]: E0226 08:14:14.817455 4741 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:14 crc kubenswrapper[4741]: E0226 08:14:14.917610 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:15 crc kubenswrapper[4741]: E0226 08:14:15.019131 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:15 crc kubenswrapper[4741]: E0226 08:14:15.120244 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:15 crc kubenswrapper[4741]: E0226 08:14:15.220830 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:15 crc kubenswrapper[4741]: E0226 08:14:15.322292 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:15 crc kubenswrapper[4741]: E0226 08:14:15.422802 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:15 crc kubenswrapper[4741]: E0226 08:14:15.524195 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:15 crc kubenswrapper[4741]: E0226 08:14:15.624999 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:15 crc kubenswrapper[4741]: E0226 08:14:15.725833 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:15 crc kubenswrapper[4741]: E0226 08:14:15.827489 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:15 crc kubenswrapper[4741]: E0226 08:14:15.876013 4741 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 08:14:15 crc 
kubenswrapper[4741]: E0226 08:14:15.927850 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.029210 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.130334 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.230933 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.230998 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.235889 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.235929 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.235941 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.235960 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.235974 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:16Z","lastTransitionTime":"2026-02-26T08:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.250014 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.254628 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.254658 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.254670 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.254687 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.254699 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:16Z","lastTransitionTime":"2026-02-26T08:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.265551 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.268963 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.269413 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.269624 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.269771 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.269898 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:16Z","lastTransitionTime":"2026-02-26T08:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.282545 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.288324 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.288437 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.288469 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.288501 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:16 crc kubenswrapper[4741]: I0226 08:14:16.288524 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:16Z","lastTransitionTime":"2026-02-26T08:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.304694 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.304933 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.331413 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.431906 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.532954 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.633735 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.733993 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.834697 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:16 crc kubenswrapper[4741]: E0226 08:14:16.936016 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:17 crc kubenswrapper[4741]: E0226 08:14:17.036869 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:17 crc kubenswrapper[4741]: E0226 08:14:17.138530 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:17 crc kubenswrapper[4741]: E0226 08:14:17.238842 4741 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:17 crc kubenswrapper[4741]: E0226 08:14:17.338999 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:17 crc kubenswrapper[4741]: E0226 08:14:17.439676 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:17 crc kubenswrapper[4741]: E0226 08:14:17.540647 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:17 crc kubenswrapper[4741]: E0226 08:14:17.640812 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:17 crc kubenswrapper[4741]: E0226 08:14:17.741442 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:17 crc kubenswrapper[4741]: E0226 08:14:17.842375 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:17 crc kubenswrapper[4741]: E0226 08:14:17.943775 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:18 crc kubenswrapper[4741]: E0226 08:14:18.044197 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:18 crc kubenswrapper[4741]: E0226 08:14:18.145672 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:18 crc kubenswrapper[4741]: E0226 08:14:18.246011 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:18 crc kubenswrapper[4741]: E0226 08:14:18.346705 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:18 crc 
kubenswrapper[4741]: E0226 08:14:18.447865 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:18 crc kubenswrapper[4741]: E0226 08:14:18.548594 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:18 crc kubenswrapper[4741]: E0226 08:14:18.649509 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:18 crc kubenswrapper[4741]: E0226 08:14:18.750047 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:18 crc kubenswrapper[4741]: I0226 08:14:18.786464 4741 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 08:14:18 crc kubenswrapper[4741]: I0226 08:14:18.788240 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:18 crc kubenswrapper[4741]: I0226 08:14:18.788503 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:18 crc kubenswrapper[4741]: I0226 08:14:18.788700 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:18 crc kubenswrapper[4741]: I0226 08:14:18.790521 4741 scope.go:117] "RemoveContainer" containerID="bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6" Feb 26 08:14:18 crc kubenswrapper[4741]: E0226 08:14:18.791185 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 
08:14:18 crc kubenswrapper[4741]: E0226 08:14:18.850385 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:18 crc kubenswrapper[4741]: E0226 08:14:18.950831 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:19 crc kubenswrapper[4741]: E0226 08:14:19.051344 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:19 crc kubenswrapper[4741]: E0226 08:14:19.152190 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:19 crc kubenswrapper[4741]: E0226 08:14:19.252756 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:19 crc kubenswrapper[4741]: E0226 08:14:19.353536 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:19 crc kubenswrapper[4741]: E0226 08:14:19.454554 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:19 crc kubenswrapper[4741]: E0226 08:14:19.555000 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:19 crc kubenswrapper[4741]: E0226 08:14:19.656207 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:19 crc kubenswrapper[4741]: E0226 08:14:19.757389 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:19 crc kubenswrapper[4741]: E0226 08:14:19.857550 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:19 crc kubenswrapper[4741]: E0226 08:14:19.957929 4741 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Feb 26 08:14:20 crc kubenswrapper[4741]: E0226 08:14:20.058691 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:20 crc kubenswrapper[4741]: E0226 08:14:20.159569 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:20 crc kubenswrapper[4741]: E0226 08:14:20.261124 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:20 crc kubenswrapper[4741]: E0226 08:14:20.361352 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:20 crc kubenswrapper[4741]: E0226 08:14:20.461459 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:20 crc kubenswrapper[4741]: E0226 08:14:20.562131 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:20 crc kubenswrapper[4741]: E0226 08:14:20.663310 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:20 crc kubenswrapper[4741]: E0226 08:14:20.764437 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:20 crc kubenswrapper[4741]: E0226 08:14:20.864615 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:20 crc kubenswrapper[4741]: E0226 08:14:20.965852 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:21 crc kubenswrapper[4741]: E0226 08:14:21.066661 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:21 crc kubenswrapper[4741]: E0226 08:14:21.167378 4741 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:21 crc kubenswrapper[4741]: E0226 08:14:21.267922 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:21 crc kubenswrapper[4741]: E0226 08:14:21.368769 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:21 crc kubenswrapper[4741]: E0226 08:14:21.469263 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:21 crc kubenswrapper[4741]: E0226 08:14:21.570099 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:21 crc kubenswrapper[4741]: E0226 08:14:21.671171 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:21 crc kubenswrapper[4741]: E0226 08:14:21.772144 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:21 crc kubenswrapper[4741]: E0226 08:14:21.872520 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:21 crc kubenswrapper[4741]: E0226 08:14:21.973316 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:22 crc kubenswrapper[4741]: E0226 08:14:22.074237 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:22 crc kubenswrapper[4741]: E0226 08:14:22.174792 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:22 crc kubenswrapper[4741]: E0226 08:14:22.275721 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:22 crc 
kubenswrapper[4741]: E0226 08:14:22.376037 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:22 crc kubenswrapper[4741]: E0226 08:14:22.476707 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:22 crc kubenswrapper[4741]: E0226 08:14:22.577878 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:22 crc kubenswrapper[4741]: I0226 08:14:22.580624 4741 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 08:14:22 crc kubenswrapper[4741]: E0226 08:14:22.678929 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:22 crc kubenswrapper[4741]: E0226 08:14:22.779947 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:22 crc kubenswrapper[4741]: E0226 08:14:22.881073 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:22 crc kubenswrapper[4741]: E0226 08:14:22.981628 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:23 crc kubenswrapper[4741]: E0226 08:14:23.082743 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:23 crc kubenswrapper[4741]: E0226 08:14:23.183785 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:23 crc kubenswrapper[4741]: E0226 08:14:23.284099 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:23 crc kubenswrapper[4741]: E0226 08:14:23.384477 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 26 08:14:23 crc kubenswrapper[4741]: E0226 08:14:23.485296 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:23 crc kubenswrapper[4741]: E0226 08:14:23.585678 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:23 crc kubenswrapper[4741]: E0226 08:14:23.686638 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:23 crc kubenswrapper[4741]: E0226 08:14:23.787164 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:23 crc kubenswrapper[4741]: E0226 08:14:23.888259 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:23 crc kubenswrapper[4741]: E0226 08:14:23.988733 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.088919 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.189486 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.289757 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.390759 4741 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.454298 4741 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.495090 4741 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.495526 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.495699 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.495867 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.496010 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:24Z","lastTransitionTime":"2026-02-26T08:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.599045 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.599399 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.599501 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.599864 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.599976 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:24Z","lastTransitionTime":"2026-02-26T08:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.703590 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.703653 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.703669 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.703692 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.703712 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:24Z","lastTransitionTime":"2026-02-26T08:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.733922 4741 apiserver.go:52] "Watching apiserver" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.767943 4741 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.768748 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bjwp7","openshift-machine-config-operator/machine-config-daemon-zqf2s","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt","openshift-multus/multus-additional-cni-plugins-f5qkr","openshift-multus/multus-mzt8d","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-2w5nl","openshift-image-registry/node-ca-869lw","openshift-multus/network-metrics-daemon-zlfsg","openshift-network-operator/iptables-alerter-4ln5h"] Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.769578 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.769711 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.769874 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.769868 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.770010 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.770210 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.770817 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.772216 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.772327 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.772788 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.773236 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.773357 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.773556 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.773242 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.773763 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.775305 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.775711 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.775843 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.776002 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.776043 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.776181 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.776221 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.776518 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bjwp7" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.775940 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.776837 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-869lw" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.777060 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.778057 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.778087 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.782640 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.783184 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.784490 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.785192 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.785420 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.786277 4741 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.786762 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.788022 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.788553 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.788674 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.789485 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.790014 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.790738 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.791266 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.791463 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.791850 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.792590 4741 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.793005 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.793317 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.793972 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.794479 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.794899 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.795431 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.795576 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.795690 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.795958 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.804273 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.808962 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.809019 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.809034 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.809058 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.809102 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:24Z","lastTransitionTime":"2026-02-26T08:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.819924 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.820974 4741 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.831573 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.841787 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.853584 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.865363 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.874693 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876190 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876245 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876276 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876476 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876498 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876521 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876546 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876570 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876594 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876617 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876639 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876665 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876736 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" 
(UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876766 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876808 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876877 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876911 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876947 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876974 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877001 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877025 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877051 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877074 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877096 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 08:14:24 crc 
kubenswrapper[4741]: I0226 08:14:24.877147 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877170 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877195 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877216 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877242 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877265 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877286 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877310 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877333 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877355 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877379 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 
08:14:24.877403 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877426 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877450 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877472 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877496 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877520 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877541 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877562 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877582 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877602 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877626 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877648 4741 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877668 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877693 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877717 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877739 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877794 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877844 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877867 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877890 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877914 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877947 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877980 4741 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878007 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878029 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878053 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878076 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878100 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878139 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878164 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878189 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878211 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878236 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878265 4741 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878317 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878342 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878407 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878432 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878455 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878477 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878500 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878523 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878547 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878571 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878595 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878621 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878645 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878689 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878712 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878736 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878758 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878782 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878804 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878828 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878855 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878879 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878924 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.876677 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878964 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878994 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879017 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877147 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879041 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879072 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879094 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879137 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879160 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879187 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879213 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879237 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879287 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879317 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879342 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879365 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879389 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879414 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879469 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879498 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879524 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879550 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879574 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879596 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879622 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" 
(UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879643 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879671 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879696 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879719 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879742 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 
08:14:24.879777 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879802 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879827 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879848 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879873 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879898 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879930 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879962 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879986 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880010 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880033 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 
08:14:24.880059 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880088 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880132 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880171 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880202 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880226 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880259 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880285 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880310 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880335 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880359 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 
08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880382 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880410 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880434 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880460 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880485 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880508 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880532 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880559 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880583 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880611 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880638 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") 
" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880664 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880694 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880718 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880745 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880769 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880792 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880820 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880846 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880871 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880894 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880919 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880953 4741 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880987 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881022 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881049 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881075 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881190 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881227 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881296 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881324 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881347 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881370 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 
08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881399 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881427 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881451 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881477 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881509 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881532 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881557 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881582 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881639 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881667 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881692 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 
08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881717 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881742 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881766 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881790 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881812 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881839 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881862 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881889 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881987 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-etc-kubernetes\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882022 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c7b5b01-4061-4003-b002-a977260886c5-mcd-auth-proxy-config\") pod \"machine-config-daemon-zqf2s\" (UID: \"2c7b5b01-4061-4003-b002-a977260886c5\") " pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882046 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovnkube-script-lib\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882070 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/81747676-7ef1-403b-8315-96e475f06342-serviceca\") pod \"node-ca-869lw\" (UID: \"81747676-7ef1-403b-8315-96e475f06342\") " pod="openshift-image-registry/node-ca-869lw" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882094 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-65fqt\" (UID: \"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882138 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3fd732e7-0e36-485f-b750-856d6869e697-multus-daemon-config\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882160 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovnkube-config\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882183 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-env-overrides\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882215 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882242 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-multus-socket-dir-parent\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882269 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882295 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c7b5b01-4061-4003-b002-a977260886c5-rootfs\") pod \"machine-config-daemon-zqf2s\" (UID: \"2c7b5b01-4061-4003-b002-a977260886c5\") " 
pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882368 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/081d7d48-c4b9-4725-bd12-32a95c02133f-system-cni-dir\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882442 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/081d7d48-c4b9-4725-bd12-32a95c02133f-cnibin\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882513 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/081d7d48-c4b9-4725-bd12-32a95c02133f-cni-binary-copy\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882607 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-run-netns\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882643 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlbm9\" (UniqueName: 
\"kubernetes.io/projected/3fd732e7-0e36-485f-b750-856d6869e697-kube-api-access-wlbm9\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882681 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882713 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/81747676-7ef1-403b-8315-96e475f06342-host\") pod \"node-ca-869lw\" (UID: \"81747676-7ef1-403b-8315-96e475f06342\") " pod="openshift-image-registry/node-ca-869lw" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882748 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882783 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-multus-cni-dir\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882817 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-multus-conf-dir\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882848 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-openvswitch\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882878 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-cni-bin\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882937 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.882974 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-var-lib-kubelet\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.883006 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-run-multus-certs\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884305 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-var-lib-openvswitch\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884351 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpwdl\" (UniqueName: \"kubernetes.io/projected/d52383bb-5c8d-4eef-9df8-93143a9326d9-kube-api-access-bpwdl\") pod \"node-resolver-bjwp7\" (UID: \"d52383bb-5c8d-4eef-9df8-93143a9326d9\") " pod="openshift-dns/node-resolver-bjwp7" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884387 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-run-ovn-kubernetes\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884424 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzbcz\" (UniqueName: \"kubernetes.io/projected/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-kube-api-access-jzbcz\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884468 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884505 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nv9p\" (UniqueName: \"kubernetes.io/projected/124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b-kube-api-access-4nv9p\") pod \"ovnkube-control-plane-749d76644c-65fqt\" (UID: \"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884533 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-var-lib-cni-multus\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884562 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884595 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 
08:14:24.884623 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-var-lib-cni-bin\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884651 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884680 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-systemd-units\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884712 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884743 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884773 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqn2s\" (UniqueName: \"kubernetes.io/projected/f2840647-3181-4a32-9386-b7f030bb9356-kube-api-access-mqn2s\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884797 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-65fqt\" (UID: \"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884823 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-slash\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884848 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-etc-openvswitch\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884873 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvrxr\" (UniqueName: 
\"kubernetes.io/projected/81747676-7ef1-403b-8315-96e475f06342-kube-api-access-rvrxr\") pod \"node-ca-869lw\" (UID: \"81747676-7ef1-403b-8315-96e475f06342\") " pod="openshift-image-registry/node-ca-869lw" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884899 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-65fqt\" (UID: \"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884955 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-system-cni-dir\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884984 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-os-release\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885010 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-hostroot\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885033 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-cni-netd\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885056 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885086 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885136 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-run-k8s-cni-cncf-io\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885163 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-cnibin\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885186 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3fd732e7-0e36-485f-b750-856d6869e697-cni-binary-copy\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885214 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c7b5b01-4061-4003-b002-a977260886c5-proxy-tls\") pod \"machine-config-daemon-zqf2s\" (UID: \"2c7b5b01-4061-4003-b002-a977260886c5\") " pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885238 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/081d7d48-c4b9-4725-bd12-32a95c02133f-os-release\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885261 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovn-node-metrics-cert\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885291 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz5lv\" (UniqueName: \"kubernetes.io/projected/2c7b5b01-4061-4003-b002-a977260886c5-kube-api-access-sz5lv\") pod \"machine-config-daemon-zqf2s\" (UID: \"2c7b5b01-4061-4003-b002-a977260886c5\") " pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885313 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-systemd\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885335 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885364 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885395 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885424 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/081d7d48-c4b9-4725-bd12-32a95c02133f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " 
pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885453 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj9pr\" (UniqueName: \"kubernetes.io/projected/081d7d48-c4b9-4725-bd12-32a95c02133f-kube-api-access-hj9pr\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885478 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-run-netns\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885506 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-kubelet\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885532 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-ovn\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885555 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/081d7d48-c4b9-4725-bd12-32a95c02133f-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885581 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-node-log\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885605 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-log-socket\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885632 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d52383bb-5c8d-4eef-9df8-93143a9326d9-hosts-file\") pod \"node-resolver-bjwp7\" (UID: \"d52383bb-5c8d-4eef-9df8-93143a9326d9\") " pod="openshift-dns/node-resolver-bjwp7" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885714 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885735 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.887374 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877510 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877858 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877893 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.877947 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878023 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878305 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878513 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878633 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878646 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.878698 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879174 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879506 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879538 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879671 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.887745 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880004 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880043 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880042 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880008 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880084 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.879989 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880403 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880408 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880791 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.880785 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881077 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.881104 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884468 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884663 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884989 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.884858 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885060 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885095 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885408 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885428 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885487 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885540 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885548 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885690 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885721 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885759 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885884 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.885948 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.886278 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.886345 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.886969 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.887103 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.887142 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.887505 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.887790 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.888078 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.888375 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.888603 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.893417 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:14:25.393389187 +0000 UTC m=+100.389326584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.892221 4741 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.892630 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.898518 4741 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.898717 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:25.398693065 +0000 UTC m=+100.394630472 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.902579 4741 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.902699 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:25.402646814 +0000 UTC m=+100.398584211 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.906887 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.907278 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.907509 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.907818 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.908285 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.908724 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.908840 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.909259 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.909556 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.909820 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.909821 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.909983 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.910243 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.910284 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.910315 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.910328 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.910337 4741 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.910449 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:25.41042335 +0000 UTC m=+100.406360777 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.910775 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.910986 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.911247 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.911476 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.914267 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.914437 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.915176 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.916978 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.917437 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.920304 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.920543 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.920609 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.920673 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.920843 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.921491 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.921517 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.921598 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.921632 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.921816 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.921875 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.922482 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.923732 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.924013 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.924326 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.924391 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.924454 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.924515 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.924737 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.924909 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.925277 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.925321 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.925383 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.925885 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.926028 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.926172 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.926202 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.926440 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.926484 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.926882 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.927178 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.927530 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.927633 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.927680 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.927778 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.927785 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.927803 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.928097 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.928304 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.928738 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.928760 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.928828 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.929362 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.929485 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.929535 4741 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:24 crc kubenswrapper[4741]: E0226 08:14:24.929621 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:25.429597633 +0000 UTC m=+100.425535060 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.930178 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.930249 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.930286 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.930297 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.930311 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.930323 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:24Z","lastTransitionTime":"2026-02-26T08:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.930390 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.930557 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.930626 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.930670 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.931592 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.931520 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.933221 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.933383 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.933481 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.933693 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.933696 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.933809 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.934053 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.934643 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.933722 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.935070 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.935245 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.935276 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.935672 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.935704 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.936143 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.936271 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.936535 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.936563 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.936601 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.936742 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.936553 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.936995 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.937010 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.937372 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.937438 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.937577 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.937018 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.937724 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.937679 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.937759 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.937730 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.938046 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.938081 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.938182 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.938181 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.938255 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.938376 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.938390 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.938570 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.938763 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.938770 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.941043 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.941331 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.941450 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.941588 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.941954 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942165 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942192 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942222 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942247 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942335 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942380 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942415 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942432 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942454 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942495 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942504 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942497 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942674 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942720 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.942850 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.943021 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.943018 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.943164 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.943246 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.943416 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.943470 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.944600 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.944689 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.946404 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.946464 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.946794 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.947101 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.954331 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.959818 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.967038 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.968972 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.977696 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.981987 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.987229 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-run-netns\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.987300 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-kubelet\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.987336 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-ovn\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 
08:14:24.987370 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-node-log\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.987422 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-ovn\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.987440 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-node-log\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.987399 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-run-netns\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.987489 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-log-socket\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.987457 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-kubelet\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.987522 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-log-socket\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.987564 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d52383bb-5c8d-4eef-9df8-93143a9326d9-hosts-file\") pod \"node-resolver-bjwp7\" (UID: \"d52383bb-5c8d-4eef-9df8-93143a9326d9\") " pod="openshift-dns/node-resolver-bjwp7" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.987602 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/081d7d48-c4b9-4725-bd12-32a95c02133f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.987668 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d52383bb-5c8d-4eef-9df8-93143a9326d9-hosts-file\") pod \"node-resolver-bjwp7\" (UID: \"d52383bb-5c8d-4eef-9df8-93143a9326d9\") " pod="openshift-dns/node-resolver-bjwp7" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.987645 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c7b5b01-4061-4003-b002-a977260886c5-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-zqf2s\" (UID: \"2c7b5b01-4061-4003-b002-a977260886c5\") " pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.991948 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.992194 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovnkube-script-lib\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.992372 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: 
"catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.992500 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/081d7d48-c4b9-4725-bd12-32a95c02133f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.995593 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovnkube-script-lib\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.995703 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/81747676-7ef1-403b-8315-96e475f06342-serviceca\") pod \"node-ca-869lw\" (UID: \"81747676-7ef1-403b-8315-96e475f06342\") " pod="openshift-image-registry/node-ca-869lw" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.996789 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-65fqt\" (UID: \"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.996832 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" 
(UniqueName: \"kubernetes.io/configmap/3fd732e7-0e36-485f-b750-856d6869e697-multus-daemon-config\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.996862 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-etc-kubernetes\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.996885 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-env-overrides\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.996890 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c7b5b01-4061-4003-b002-a977260886c5-mcd-auth-proxy-config\") pod \"machine-config-daemon-zqf2s\" (UID: \"2c7b5b01-4061-4003-b002-a977260886c5\") " pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.996926 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-multus-socket-dir-parent\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.996951 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovnkube-config\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.996985 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c7b5b01-4061-4003-b002-a977260886c5-rootfs\") pod \"machine-config-daemon-zqf2s\" (UID: \"2c7b5b01-4061-4003-b002-a977260886c5\") " pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997007 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/081d7d48-c4b9-4725-bd12-32a95c02133f-system-cni-dir\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997041 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/081d7d48-c4b9-4725-bd12-32a95c02133f-cni-binary-copy\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997064 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-run-netns\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997087 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlbm9\" (UniqueName: 
\"kubernetes.io/projected/3fd732e7-0e36-485f-b750-856d6869e697-kube-api-access-wlbm9\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997141 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/081d7d48-c4b9-4725-bd12-32a95c02133f-cnibin\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997167 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-multus-cni-dir\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997191 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-multus-conf-dir\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997209 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c7b5b01-4061-4003-b002-a977260886c5-rootfs\") pod \"machine-config-daemon-zqf2s\" (UID: \"2c7b5b01-4061-4003-b002-a977260886c5\") " pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997228 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-openvswitch\") pod \"ovnkube-node-2w5nl\" (UID: 
\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997244 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/081d7d48-c4b9-4725-bd12-32a95c02133f-cnibin\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997256 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-run-netns\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997274 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/81747676-7ef1-403b-8315-96e475f06342-host\") pod \"node-ca-869lw\" (UID: \"81747676-7ef1-403b-8315-96e475f06342\") " pod="openshift-image-registry/node-ca-869lw" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997271 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-openvswitch\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997359 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/81747676-7ef1-403b-8315-96e475f06342-host\") pod \"node-ca-869lw\" (UID: \"81747676-7ef1-403b-8315-96e475f06342\") " pod="openshift-image-registry/node-ca-869lw" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997441 
4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-multus-cni-dir\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997487 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-var-lib-kubelet\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997510 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-run-multus-certs\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997535 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-var-lib-openvswitch\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997559 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-cni-bin\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997582 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-run-ovn-kubernetes\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997732 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-env-overrides\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997757 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-run-multus-certs\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997784 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-multus-conf-dir\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997800 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-var-lib-kubelet\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997811 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-multus-socket-dir-parent\") pod \"multus-mzt8d\" (UID: 
\"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997828 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/081d7d48-c4b9-4725-bd12-32a95c02133f-system-cni-dir\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997820 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-etc-kubernetes\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997862 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-run-ovn-kubernetes\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997866 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-var-lib-openvswitch\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.997951 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-cni-bin\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 
08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998204 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzbcz\" (UniqueName: \"kubernetes.io/projected/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-kube-api-access-jzbcz\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998512 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nv9p\" (UniqueName: \"kubernetes.io/projected/124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b-kube-api-access-4nv9p\") pod \"ovnkube-control-plane-749d76644c-65fqt\" (UID: \"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998516 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/81747676-7ef1-403b-8315-96e475f06342-serviceca\") pod \"node-ca-869lw\" (UID: \"81747676-7ef1-403b-8315-96e475f06342\") " pod="openshift-image-registry/node-ca-869lw" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998558 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-var-lib-cni-multus\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998582 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovnkube-config\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 
08:14:24.998615 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpwdl\" (UniqueName: \"kubernetes.io/projected/d52383bb-5c8d-4eef-9df8-93143a9326d9-kube-api-access-bpwdl\") pod \"node-resolver-bjwp7\" (UID: \"d52383bb-5c8d-4eef-9df8-93143a9326d9\") " pod="openshift-dns/node-resolver-bjwp7" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998629 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-var-lib-cni-multus\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998673 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-var-lib-cni-bin\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998708 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998740 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-var-lib-cni-bin\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998529 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/081d7d48-c4b9-4725-bd12-32a95c02133f-cni-binary-copy\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998812 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqn2s\" (UniqueName: \"kubernetes.io/projected/f2840647-3181-4a32-9386-b7f030bb9356-kube-api-access-mqn2s\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998825 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998846 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-65fqt\" (UID: \"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998887 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3fd732e7-0e36-485f-b750-856d6869e697-multus-daemon-config\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.998913 
4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-systemd-units\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999009 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-etc-openvswitch\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999036 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-systemd-units\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999064 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvrxr\" (UniqueName: \"kubernetes.io/projected/81747676-7ef1-403b-8315-96e475f06342-kube-api-access-rvrxr\") pod \"node-ca-869lw\" (UID: \"81747676-7ef1-403b-8315-96e475f06342\") " pod="openshift-image-registry/node-ca-869lw" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999077 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-etc-openvswitch\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999093 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-65fqt\" (UID: \"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999144 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-system-cni-dir\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999169 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-os-release\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999223 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-slash\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999345 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-cni-netd\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999382 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-65fqt\" (UID: \"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999417 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-os-release\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999428 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-slash\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999437 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-system-cni-dir\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999500 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 08:14:24 crc kubenswrapper[4741]: I0226 08:14:24.999533 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-cni-netd\") pod \"ovnkube-node-2w5nl\" (UID: 
\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:24.999578 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-run-k8s-cni-cncf-io\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:24.999630 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:24.999639 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-hostroot\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:24.999658 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-host-run-k8s-cni-cncf-io\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:24.999672 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3fd732e7-0e36-485f-b750-856d6869e697-cni-binary-copy\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 
08:14:24.999686 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-hostroot\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:24.999701 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c7b5b01-4061-4003-b002-a977260886c5-proxy-tls\") pod \"machine-config-daemon-zqf2s\" (UID: \"2c7b5b01-4061-4003-b002-a977260886c5\") " pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:24.999727 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/081d7d48-c4b9-4725-bd12-32a95c02133f-os-release\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:24.999754 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovn-node-metrics-cert\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:24.999788 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-cnibin\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:24.999812 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-systemd\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:24.999838 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.000082 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-65fqt\" (UID: \"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.000197 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3fd732e7-0e36-485f-b750-856d6869e697-cni-binary-copy\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.000239 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/081d7d48-c4b9-4725-bd12-32a95c02133f-os-release\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.000242 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-systemd\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.000360 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.000970 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3fd732e7-0e36-485f-b750-856d6869e697-cnibin\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001039 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001071 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz5lv\" (UniqueName: \"kubernetes.io/projected/2c7b5b01-4061-4003-b002-a977260886c5-kube-api-access-sz5lv\") pod \"machine-config-daemon-zqf2s\" (UID: \"2c7b5b01-4061-4003-b002-a977260886c5\") " pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001119 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/081d7d48-c4b9-4725-bd12-32a95c02133f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001156 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj9pr\" (UniqueName: \"kubernetes.io/projected/081d7d48-c4b9-4725-bd12-32a95c02133f-kube-api-access-hj9pr\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001330 4741 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001282 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001348 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001412 4741 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001435 4741 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001468 4741 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001494 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001516 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001537 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001558 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001577 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001597 4741 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001617 4741 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001636 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.001163 4741 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:25 crc kubenswrapper[4741]: 
E0226 08:14:25.001712 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs podName:f2840647-3181-4a32-9386-b7f030bb9356 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:25.501692816 +0000 UTC m=+100.497630213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs") pod "network-metrics-daemon-zlfsg" (UID: "f2840647-3181-4a32-9386-b7f030bb9356") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001655 4741 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001803 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001825 4741 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001846 4741 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001868 4741 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 
08:14:25.001890 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001910 4741 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001963 4741 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.001985 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002007 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002028 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002049 4741 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: 
I0226 08:14:25.002069 4741 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002088 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002139 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002159 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002178 4741 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002198 4741 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002217 4741 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002240 4741 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002258 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002278 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002307 4741 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002326 4741 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002349 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002368 4741 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002388 4741 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") 
on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002407 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002427 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002446 4741 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002467 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002487 4741 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002507 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002527 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: 
I0226 08:14:25.002546 4741 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002565 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002610 4741 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002631 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002650 4741 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002669 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002688 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002708 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" 
(UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002739 4741 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002763 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002784 4741 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002803 4741 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002823 4741 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002841 4741 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002860 4741 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 
08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002879 4741 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002898 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002917 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002937 4741 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.002956 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003004 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003023 4741 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003044 
4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003062 4741 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003080 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003100 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003141 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003161 4741 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003179 4741 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003198 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003218 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003237 4741 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003953 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003976 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004007 4741 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004026 4741 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004044 4741 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004063 4741 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004080 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004099 4741 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004146 4741 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004167 4741 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004186 4741 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004205 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" 
DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004224 4741 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004244 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004265 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004286 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004304 4741 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004322 4741 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004345 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc 
kubenswrapper[4741]: I0226 08:14:25.004366 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004386 4741 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004406 4741 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003674 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/081d7d48-c4b9-4725-bd12-32a95c02133f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004424 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004473 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004516 4741 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" 
Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004532 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.003904 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovn-node-metrics-cert\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004544 4741 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004594 4741 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004613 4741 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004633 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004654 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc 
kubenswrapper[4741]: I0226 08:14:25.004673 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004693 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004712 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004732 4741 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004751 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004792 4741 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004811 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004830 4741 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004847 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004887 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004907 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004926 4741 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004945 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004963 4741 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.004992 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005011 4741 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005031 4741 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005049 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005066 4741 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005085 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005102 4741 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005150 4741 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc 
kubenswrapper[4741]: I0226 08:14:25.005169 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005194 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005228 4741 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005248 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005268 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005286 4741 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005307 4741 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005326 4741 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005346 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005367 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005386 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005405 4741 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005425 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005444 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005463 4741 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005481 4741 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005500 4741 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005521 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005541 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005560 4741 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005579 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005604 4741 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005623 4741 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005642 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005661 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005659 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c7b5b01-4061-4003-b002-a977260886c5-proxy-tls\") pod \"machine-config-daemon-zqf2s\" (UID: \"2c7b5b01-4061-4003-b002-a977260886c5\") " pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005681 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005739 4741 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005755 4741 reconciler_common.go:293] "Volume detached for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005771 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005790 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005804 4741 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005820 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005832 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005845 4741 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005859 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005870 4741 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005882 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005894 4741 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005907 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005919 4741 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005933 4741 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005949 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 
crc kubenswrapper[4741]: I0226 08:14:25.005961 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005976 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.005989 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006004 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006019 4741 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006033 4741 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006046 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006058 4741 reconciler_common.go:293] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006071 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006084 4741 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006095 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006126 4741 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006143 4741 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006155 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006169 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006182 4741 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006194 4741 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.006207 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.011149 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-65fqt\" (UID: \"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.015620 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nv9p\" (UniqueName: \"kubernetes.io/projected/124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b-kube-api-access-4nv9p\") pod \"ovnkube-control-plane-749d76644c-65fqt\" (UID: \"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.017230 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlbm9\" (UniqueName: 
\"kubernetes.io/projected/3fd732e7-0e36-485f-b750-856d6869e697-kube-api-access-wlbm9\") pod \"multus-mzt8d\" (UID: \"3fd732e7-0e36-485f-b750-856d6869e697\") " pod="openshift-multus/multus-mzt8d" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.019413 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvrxr\" (UniqueName: \"kubernetes.io/projected/81747676-7ef1-403b-8315-96e475f06342-kube-api-access-rvrxr\") pod \"node-ca-869lw\" (UID: \"81747676-7ef1-403b-8315-96e475f06342\") " pod="openshift-image-registry/node-ca-869lw" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.020390 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzbcz\" (UniqueName: \"kubernetes.io/projected/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-kube-api-access-jzbcz\") pod \"ovnkube-node-2w5nl\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.023010 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpwdl\" (UniqueName: \"kubernetes.io/projected/d52383bb-5c8d-4eef-9df8-93143a9326d9-kube-api-access-bpwdl\") pod \"node-resolver-bjwp7\" (UID: \"d52383bb-5c8d-4eef-9df8-93143a9326d9\") " pod="openshift-dns/node-resolver-bjwp7" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.026812 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj9pr\" (UniqueName: \"kubernetes.io/projected/081d7d48-c4b9-4725-bd12-32a95c02133f-kube-api-access-hj9pr\") pod \"multus-additional-cni-plugins-f5qkr\" (UID: \"081d7d48-c4b9-4725-bd12-32a95c02133f\") " pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.032061 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz5lv\" (UniqueName: 
\"kubernetes.io/projected/2c7b5b01-4061-4003-b002-a977260886c5-kube-api-access-sz5lv\") pod \"machine-config-daemon-zqf2s\" (UID: \"2c7b5b01-4061-4003-b002-a977260886c5\") " pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.032806 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.032852 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.032871 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.032897 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.032914 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:25Z","lastTransitionTime":"2026-02-26T08:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.033767 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqn2s\" (UniqueName: \"kubernetes.io/projected/f2840647-3181-4a32-9386-b7f030bb9356-kube-api-access-mqn2s\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.105469 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.126074 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 08:14:25 crc kubenswrapper[4741]: W0226 08:14:25.128491 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-3fd252546b556d8d9f51c2b6d3f60061effdeb36a870c8471873c6314ae43f42 WatchSource:0}: Error finding container 3fd252546b556d8d9f51c2b6d3f60061effdeb36a870c8471873c6314ae43f42: Status 404 returned error can't find the container with id 3fd252546b556d8d9f51c2b6d3f60061effdeb36a870c8471873c6314ae43f42 Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.131346 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.135894 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.135960 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.135975 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.135994 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.136007 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:25Z","lastTransitionTime":"2026-02-26T08:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.137641 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" Feb 26 08:14:25 crc kubenswrapper[4741]: W0226 08:14:25.140793 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-8e9e61996767ed844c894047fa6d95965cd6e2d36d475fe23c491128363e188a WatchSource:0}: Error finding container 8e9e61996767ed844c894047fa6d95965cd6e2d36d475fe23c491128363e188a: Status 404 returned error can't find the container with id 8e9e61996767ed844c894047fa6d95965cd6e2d36d475fe23c491128363e188a Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.148203 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:14:25 crc kubenswrapper[4741]: W0226 08:14:25.160714 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod124d6a5b_7ffa_4684_b0b4_2c6b75d42b6b.slice/crio-57a5e4a4f568e220bf95c5cc5777a501e50f63f483d482fbdd7f493b5f0f956e WatchSource:0}: Error finding container 57a5e4a4f568e220bf95c5cc5777a501e50f63f483d482fbdd7f493b5f0f956e: Status 404 returned error can't find the container with id 57a5e4a4f568e220bf95c5cc5777a501e50f63f483d482fbdd7f493b5f0f956e Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.162045 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.178475 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mzt8d" Feb 26 08:14:25 crc kubenswrapper[4741]: W0226 08:14:25.179265 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c7b5b01_4061_4003_b002_a977260886c5.slice/crio-d2b8ac317eaf7ab0c9380dca0dc9c3834b9eac45964fbde72ca4aa8b60f725f5 WatchSource:0}: Error finding container d2b8ac317eaf7ab0c9380dca0dc9c3834b9eac45964fbde72ca4aa8b60f725f5: Status 404 returned error can't find the container with id d2b8ac317eaf7ab0c9380dca0dc9c3834b9eac45964fbde72ca4aa8b60f725f5 Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.210555 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" Feb 26 08:14:25 crc kubenswrapper[4741]: W0226 08:14:25.214500 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1116a7ff_8389_43fe_9f5f_b2b2c23ca9f1.slice/crio-a46f1eb1950e35ea0ef425ed873f67e7d77811df9f16badf75d6f58ea42b664c WatchSource:0}: Error finding container a46f1eb1950e35ea0ef425ed873f67e7d77811df9f16badf75d6f58ea42b664c: Status 404 returned error can't find the container with id a46f1eb1950e35ea0ef425ed873f67e7d77811df9f16badf75d6f58ea42b664c Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.227333 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bjwp7" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.234911 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-869lw" Feb 26 08:14:25 crc kubenswrapper[4741]: W0226 08:14:25.250832 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod081d7d48_c4b9_4725_bd12_32a95c02133f.slice/crio-a1ac3aab622a1fb6b84b459041c301523c71e8933a9a17b09668909578f12ea7 WatchSource:0}: Error finding container a1ac3aab622a1fb6b84b459041c301523c71e8933a9a17b09668909578f12ea7: Status 404 returned error can't find the container with id a1ac3aab622a1fb6b84b459041c301523c71e8933a9a17b09668909578f12ea7 Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.251060 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.251141 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.251158 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.251182 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.251199 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:25Z","lastTransitionTime":"2026-02-26T08:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.257653 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" event={"ID":"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b","Type":"ContainerStarted","Data":"57a5e4a4f568e220bf95c5cc5777a501e50f63f483d482fbdd7f493b5f0f956e"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.258685 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8e9e61996767ed844c894047fa6d95965cd6e2d36d475fe23c491128363e188a"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.259588 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3fd252546b556d8d9f51c2b6d3f60061effdeb36a870c8471873c6314ae43f42"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.260504 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"d2b8ac317eaf7ab0c9380dca0dc9c3834b9eac45964fbde72ca4aa8b60f725f5"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.265871 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"af7f04ac061aa301d2283cca811b6e5ae839f61b9714f0e70e3fe2946df71f1a"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.267439 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mzt8d" 
event={"ID":"3fd732e7-0e36-485f-b750-856d6869e697","Type":"ContainerStarted","Data":"0e7c68c283aab5877d039fb126bbe00e93636ca818d15aa89a40226f08fe8123"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.268621 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerStarted","Data":"a46f1eb1950e35ea0ef425ed873f67e7d77811df9f16badf75d6f58ea42b664c"} Feb 26 08:14:25 crc kubenswrapper[4741]: W0226 08:14:25.280506 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd52383bb_5c8d_4eef_9df8_93143a9326d9.slice/crio-5eb1da185ac2f23a4f55f9d305e6128193c887edf4d963d46c089ae94565f61f WatchSource:0}: Error finding container 5eb1da185ac2f23a4f55f9d305e6128193c887edf4d963d46c089ae94565f61f: Status 404 returned error can't find the container with id 5eb1da185ac2f23a4f55f9d305e6128193c887edf4d963d46c089ae94565f61f Feb 26 08:14:25 crc kubenswrapper[4741]: W0226 08:14:25.328901 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81747676_7ef1_403b_8315_96e475f06342.slice/crio-1ed0169828100cbbd7971ac6f291e3ec311b8c819a375bd56831d3756ffad047 WatchSource:0}: Error finding container 1ed0169828100cbbd7971ac6f291e3ec311b8c819a375bd56831d3756ffad047: Status 404 returned error can't find the container with id 1ed0169828100cbbd7971ac6f291e3ec311b8c819a375bd56831d3756ffad047 Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.357143 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.357218 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.357231 4741 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.357252 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.357286 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:25Z","lastTransitionTime":"2026-02-26T08:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.410046 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.410300 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:14:26.410249485 +0000 UTC m=+101.406186872 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.410403 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.410444 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.410554 4741 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.410621 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:26.410613245 +0000 UTC m=+101.406550632 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.410635 4741 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.410744 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:26.410719738 +0000 UTC m=+101.406657295 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.462386 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.462427 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.462438 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.462454 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:25 crc 
kubenswrapper[4741]: I0226 08:14:25.462464 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:25Z","lastTransitionTime":"2026-02-26T08:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.511152 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.511578 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.511610 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.511789 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:25 crc 
kubenswrapper[4741]: E0226 08:14:25.511813 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.511825 4741 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.511876 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:26.511861367 +0000 UTC m=+101.507798754 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.511897 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.511921 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.511934 4741 projected.go:194] Error preparing data for 
projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.511970 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:26.51195942 +0000 UTC m=+101.507896807 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.511989 4741 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:25 crc kubenswrapper[4741]: E0226 08:14:25.512037 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs podName:f2840647-3181-4a32-9386-b7f030bb9356 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:26.512023042 +0000 UTC m=+101.507960429 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs") pod "network-metrics-daemon-zlfsg" (UID: "f2840647-3181-4a32-9386-b7f030bb9356") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.569647 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.569704 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.569717 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.569750 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.569762 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:25Z","lastTransitionTime":"2026-02-26T08:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.672838 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.672895 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.672912 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.672937 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.672954 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:25Z","lastTransitionTime":"2026-02-26T08:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.776479 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.776530 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.776541 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.776559 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.776571 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:25Z","lastTransitionTime":"2026-02-26T08:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.791665 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.792439 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.794198 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.795289 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.796858 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.797655 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.798529 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.800440 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.801174 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.801566 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.802908 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.803720 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.805264 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.806089 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.807021 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.808414 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.809271 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.810620 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.811268 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.812086 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.813671 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.814464 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.816521 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.817283 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.820228 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.820440 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.821981 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.823747 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.825367 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.826156 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.828043 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.829072 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.830006 4741 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.830858 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.834092 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.834902 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.836470 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.838619 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.838744 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.839590 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.840936 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.841869 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.843358 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.844215 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.845605 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.846552 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.847946 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.848700 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.849979 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.850755 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.852328 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.853031 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.854694 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.855586 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.856613 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.858251 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.858918 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.859150 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.867906 4741 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.877079 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.880350 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.880432 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.880455 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.880673 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.880762 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:25Z","lastTransitionTime":"2026-02-26T08:14:25Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.889210 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.902652 4741 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.924192 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.946479 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.980474 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.990329 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.990372 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.990382 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.990398 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:25 crc kubenswrapper[4741]: I0226 08:14:25.990408 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:25Z","lastTransitionTime":"2026-02-26T08:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.002773 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.018670 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.027378 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.094265 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.094316 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.094328 4741 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.094347 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.094360 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:26Z","lastTransitionTime":"2026-02-26T08:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.197053 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.197141 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.197165 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.197189 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.197207 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:26Z","lastTransitionTime":"2026-02-26T08:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.276302 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.276371 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.278782 4741 generic.go:334] "Generic (PLEG): container finished" podID="081d7d48-c4b9-4725-bd12-32a95c02133f" containerID="97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6" exitCode=0 Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.278841 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" event={"ID":"081d7d48-c4b9-4725-bd12-32a95c02133f","Type":"ContainerDied","Data":"97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.278859 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" event={"ID":"081d7d48-c4b9-4725-bd12-32a95c02133f","Type":"ContainerStarted","Data":"a1ac3aab622a1fb6b84b459041c301523c71e8933a9a17b09668909578f12ea7"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.280985 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-869lw" event={"ID":"81747676-7ef1-403b-8315-96e475f06342","Type":"ContainerStarted","Data":"e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 
08:14:26.281040 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-869lw" event={"ID":"81747676-7ef1-403b-8315-96e475f06342","Type":"ContainerStarted","Data":"1ed0169828100cbbd7971ac6f291e3ec311b8c819a375bd56831d3756ffad047"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.283372 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mzt8d" event={"ID":"3fd732e7-0e36-485f-b750-856d6869e697","Type":"ContainerStarted","Data":"991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.290072 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.294089 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" event={"ID":"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b","Type":"ContainerStarted","Data":"cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.294184 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" event={"ID":"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b","Type":"ContainerStarted","Data":"1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.299483 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.300872 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.300925 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.301010 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.301030 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.301043 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.301060 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.301078 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:26Z","lastTransitionTime":"2026-02-26T08:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.305057 4741 generic.go:334] "Generic (PLEG): container finished" podID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerID="8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f" exitCode=0 Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.305159 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.308085 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bjwp7" event={"ID":"d52383bb-5c8d-4eef-9df8-93143a9326d9","Type":"ContainerStarted","Data":"b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.308140 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bjwp7" event={"ID":"d52383bb-5c8d-4eef-9df8-93143a9326d9","Type":"ContainerStarted","Data":"5eb1da185ac2f23a4f55f9d305e6128193c887edf4d963d46c089ae94565f61f"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.324088 4741 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.341759 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.352169 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d
180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.364417 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.373506 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.391476 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.391546 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.391560 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.391581 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 
08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.391593 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:26Z","lastTransitionTime":"2026-02-26T08:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.397767 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.408936 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\
\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":
485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"sys
temUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.415978 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.416041 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.416062 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.416091 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.416144 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:26Z","lastTransitionTime":"2026-02-26T08:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.426723 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.426860 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.426913 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.427259 4741 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.427383 4741 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.427481 4741 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:14:28.427460549 +0000 UTC m=+103.423397936 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.428183 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:28.428137528 +0000 UTC m=+103.424075025 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.428532 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:28.428516199 +0000 UTC m=+103.424453826 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.428559 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.432649 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.437671 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.437716 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.437731 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.437751 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.437765 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:26Z","lastTransitionTime":"2026-02-26T08:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.443618 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.452478 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.458400 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.460686 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.460727 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.460738 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:26 crc 
kubenswrapper[4741]: I0226 08:14:26.460779 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.460791 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:26Z","lastTransitionTime":"2026-02-26T08:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.476230 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.481524 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\
\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":
485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"sys
temUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.486929 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.486987 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.486998 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.487016 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.487028 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:26Z","lastTransitionTime":"2026-02-26T08:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.493154 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.500838 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.500974 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.503496 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.503544 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.503560 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.503582 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.503598 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:26Z","lastTransitionTime":"2026-02-26T08:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.508351 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.523389 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc 
kubenswrapper[4741]: I0226 08:14:26.530883 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.530963 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.531008 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.531160 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.531202 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.531221 4741 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.531225 4741 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.531285 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.531334 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.531356 4741 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.531303 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs podName:f2840647-3181-4a32-9386-b7f030bb9356 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:28.531278243 +0000 UTC m=+103.527215650 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs") pod "network-metrics-daemon-zlfsg" (UID: "f2840647-3181-4a32-9386-b7f030bb9356") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.531451 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:28.531427057 +0000 UTC m=+103.527364464 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.531470 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:28.531459558 +0000 UTC m=+103.527396965 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.537519 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.554496 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc 
kubenswrapper[4741]: I0226 08:14:26.569135 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.582105 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d
180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.594535 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.605290 4741 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.606639 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.606674 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.606685 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.606702 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.606717 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:26Z","lastTransitionTime":"2026-02-26T08:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.619556 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.648646 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.660494 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.679060 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.698747 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.709900 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.709980 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.710000 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.710028 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.710048 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:26Z","lastTransitionTime":"2026-02-26T08:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.718872 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.733216 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.747833 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.786573 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.786669 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.786717 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.786787 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.786726 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.786998 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.787098 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:26 crc kubenswrapper[4741]: E0226 08:14:26.787222 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.813254 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.813317 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.813331 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.813381 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.813396 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:26Z","lastTransitionTime":"2026-02-26T08:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.917232 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.917561 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.917707 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.917848 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:26 crc kubenswrapper[4741]: I0226 08:14:26.917999 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:26Z","lastTransitionTime":"2026-02-26T08:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.021787 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.021853 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.021874 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.021901 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.021922 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:27Z","lastTransitionTime":"2026-02-26T08:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.125263 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.125742 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.125829 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.125913 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.126015 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:27Z","lastTransitionTime":"2026-02-26T08:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.230374 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.231453 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.231594 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.231709 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.231812 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:27Z","lastTransitionTime":"2026-02-26T08:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.314488 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" event={"ID":"081d7d48-c4b9-4725-bd12-32a95c02133f","Type":"ContainerStarted","Data":"f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2"} Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.318152 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerStarted","Data":"e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a"} Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.318202 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerStarted","Data":"6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775"} Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.328650 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.336877 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.337077 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.337197 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:27 crc 
kubenswrapper[4741]: I0226 08:14:27.337276 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.337333 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:27Z","lastTransitionTime":"2026-02-26T08:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.353076 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.368343 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.378561 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.389560 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.399450 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc 
kubenswrapper[4741]: I0226 08:14:27.411598 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.422396 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.433465 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.439427 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.439622 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.439804 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.440010 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.440206 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:27Z","lastTransitionTime":"2026-02-26T08:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.447033 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.459034 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.470635 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.482349 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.493080 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:27Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.543092 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.543382 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.543486 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.543571 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.543646 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:27Z","lastTransitionTime":"2026-02-26T08:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.646229 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.646586 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.646662 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.646732 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.646792 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:27Z","lastTransitionTime":"2026-02-26T08:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.749363 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.749411 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.749425 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.749442 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.749454 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:27Z","lastTransitionTime":"2026-02-26T08:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.852276 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.852644 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.852678 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.852699 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.852715 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:27Z","lastTransitionTime":"2026-02-26T08:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.955055 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.955129 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.955150 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.955175 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:27 crc kubenswrapper[4741]: I0226 08:14:27.955193 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:27Z","lastTransitionTime":"2026-02-26T08:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.057419 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.057459 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.057468 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.057484 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.057496 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:28Z","lastTransitionTime":"2026-02-26T08:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.160453 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.160490 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.160501 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.160519 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.160532 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:28Z","lastTransitionTime":"2026-02-26T08:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.262895 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.264257 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.264300 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.264327 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.264342 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:28Z","lastTransitionTime":"2026-02-26T08:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.327224 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerStarted","Data":"f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.327285 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerStarted","Data":"c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.327301 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerStarted","Data":"5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.327314 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerStarted","Data":"95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.329376 4741 generic.go:334] "Generic (PLEG): container finished" podID="081d7d48-c4b9-4725-bd12-32a95c02133f" containerID="f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2" exitCode=0 Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.329449 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" event={"ID":"081d7d48-c4b9-4725-bd12-32a95c02133f","Type":"ContainerDied","Data":"f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.348043 4741 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.367369 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.367435 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.367446 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.367466 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.367481 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:28Z","lastTransitionTime":"2026-02-26T08:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.389959 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.409788 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.421356 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.433178 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.444913 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc 
kubenswrapper[4741]: I0226 08:14:28.455896 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.459297 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.459380 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.459404 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.459427 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:14:32.459409483 +0000 UTC m=+107.455346860 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.459466 4741 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.459500 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:32.459491285 +0000 UTC m=+107.455428672 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.459635 4741 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.459749 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:32.459722291 +0000 UTC m=+107.455659878 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.466866 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.469951 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.469989 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.469999 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.470015 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.470026 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:28Z","lastTransitionTime":"2026-02-26T08:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.480645 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.492168 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.503060 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.516661 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.530388 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.540593 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.559998 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.560322 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.560357 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.560250 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 
08:14:28.560410 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.560426 4741 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.560494 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.560509 4741 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.560516 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.560647 4741 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.560495 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-26 08:14:32.5604725 +0000 UTC m=+107.556410067 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.560715 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs podName:f2840647-3181-4a32-9386-b7f030bb9356 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:32.560698496 +0000 UTC m=+107.556636023 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs") pod "network-metrics-daemon-zlfsg" (UID: "f2840647-3181-4a32-9386-b7f030bb9356") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.560769 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:32.560740667 +0000 UTC m=+107.556678064 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.572383 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.572425 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.572436 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.572454 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.572467 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:28Z","lastTransitionTime":"2026-02-26T08:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.675729 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.675780 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.675795 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.675815 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.675829 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:28Z","lastTransitionTime":"2026-02-26T08:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.779004 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.779074 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.779092 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.779138 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.779155 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:28Z","lastTransitionTime":"2026-02-26T08:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.786342 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.786393 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.786423 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.786359 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.786571 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.786675 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.786852 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:28 crc kubenswrapper[4741]: E0226 08:14:28.787003 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.882339 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.882382 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.882396 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.882416 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.882427 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:28Z","lastTransitionTime":"2026-02-26T08:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.986821 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.986881 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.986903 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.986936 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:28 crc kubenswrapper[4741]: I0226 08:14:28.986960 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:28Z","lastTransitionTime":"2026-02-26T08:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.089337 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.089388 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.089400 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.089420 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.089430 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:29Z","lastTransitionTime":"2026-02-26T08:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.192468 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.192560 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.192586 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.192621 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.192645 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:29Z","lastTransitionTime":"2026-02-26T08:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.295374 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.295431 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.295440 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.295459 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.295471 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:29Z","lastTransitionTime":"2026-02-26T08:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.335256 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b"} Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.338203 4741 generic.go:334] "Generic (PLEG): container finished" podID="081d7d48-c4b9-4725-bd12-32a95c02133f" containerID="59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799" exitCode=0 Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.338233 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" event={"ID":"081d7d48-c4b9-4725-bd12-32a95c02133f","Type":"ContainerDied","Data":"59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799"} Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.358549 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.381082 4741 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.396833 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.402826 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.402879 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.402889 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.402909 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.402920 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:29Z","lastTransitionTime":"2026-02-26T08:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.415618 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.432096 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.446819 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.462240 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.484232 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.504062 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.506336 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:29 crc 
kubenswrapper[4741]: I0226 08:14:29.506402 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.506420 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.506447 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.506465 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:29Z","lastTransitionTime":"2026-02-26T08:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.516741 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.531529 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.546746 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.560446 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc 
kubenswrapper[4741]: I0226 08:14:29.579308 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.598941 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.614134 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.614181 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.614189 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.614205 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.614216 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:29Z","lastTransitionTime":"2026-02-26T08:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.623939 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.642264 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.662807 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e9
6f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.679342 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.694598 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.705991 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc 
kubenswrapper[4741]: I0226 08:14:29.716836 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.716892 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.716904 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.716924 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.716943 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:29Z","lastTransitionTime":"2026-02-26T08:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.723162 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.737560 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d
180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.755286 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.770590 4741 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.786777 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.803170 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.807744 4741 scope.go:117] "RemoveContainer" containerID="bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.812559 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.818915 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.818948 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.818962 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.818984 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.819000 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:29Z","lastTransitionTime":"2026-02-26T08:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.827272 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:29Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.921946 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.921995 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.922005 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.922025 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:29 crc kubenswrapper[4741]: I0226 08:14:29.922037 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:29Z","lastTransitionTime":"2026-02-26T08:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.024715 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.024790 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.024802 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.024822 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.024834 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:30Z","lastTransitionTime":"2026-02-26T08:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.128505 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.128572 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.128596 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.128628 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.128651 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:30Z","lastTransitionTime":"2026-02-26T08:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.232246 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.232311 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.232329 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.232355 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.232374 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:30Z","lastTransitionTime":"2026-02-26T08:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.336123 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.336176 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.336190 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.336217 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.336258 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:30Z","lastTransitionTime":"2026-02-26T08:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.344917 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.348015 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f"} Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.348647 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.352951 4741 generic.go:334] "Generic (PLEG): container finished" podID="081d7d48-c4b9-4725-bd12-32a95c02133f" containerID="33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c" exitCode=0 Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.353040 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" event={"ID":"081d7d48-c4b9-4725-bd12-32a95c02133f","Type":"ContainerDied","Data":"33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c"} Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.370876 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.401538 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.417738 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.432326 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.439817 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.439865 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.439880 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.439899 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.439910 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:30Z","lastTransitionTime":"2026-02-26T08:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.446639 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc 
kubenswrapper[4741]: I0226 08:14:30.467158 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.485235 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.501567 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.519022 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.535604 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.542071 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:30 crc 
kubenswrapper[4741]: I0226 08:14:30.542143 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.542158 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.542180 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.542196 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:30Z","lastTransitionTime":"2026-02-26T08:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.549946 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.572071 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.590934 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546
b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.608603 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.621832 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.637184 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.645283 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.645338 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.645357 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.645380 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.645397 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:30Z","lastTransitionTime":"2026-02-26T08:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.653282 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.672874 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.685167 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.708174 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.723031 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.737422 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.748535 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.748573 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.748585 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.748604 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.748615 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:30Z","lastTransitionTime":"2026-02-26T08:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.756814 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.779349 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.786269 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.786349 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.786289 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:30 crc kubenswrapper[4741]: E0226 08:14:30.786730 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.787418 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:30 crc kubenswrapper[4741]: E0226 08:14:30.787680 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:30 crc kubenswrapper[4741]: E0226 08:14:30.787938 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:30 crc kubenswrapper[4741]: E0226 08:14:30.788160 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.814696 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.833430 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.849899 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.851460 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.851519 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.851539 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.851567 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.851587 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:30Z","lastTransitionTime":"2026-02-26T08:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.869178 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.888459 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc 
kubenswrapper[4741]: I0226 08:14:30.913266 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:30Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.955683 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.955739 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.955751 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.955769 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:30 crc kubenswrapper[4741]: I0226 08:14:30.955783 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:30Z","lastTransitionTime":"2026-02-26T08:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.058574 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.058658 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.058676 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.058702 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.058721 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:31Z","lastTransitionTime":"2026-02-26T08:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.161627 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.161665 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.161674 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.161689 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.161700 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:31Z","lastTransitionTime":"2026-02-26T08:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.264654 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.264698 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.264713 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.264734 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.264746 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:31Z","lastTransitionTime":"2026-02-26T08:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.362377 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" event={"ID":"081d7d48-c4b9-4725-bd12-32a95c02133f","Type":"ContainerStarted","Data":"00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f"} Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.369631 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.369696 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.369720 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.369752 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.369778 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:31Z","lastTransitionTime":"2026-02-26T08:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.371184 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerStarted","Data":"fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb"} Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.473461 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.473514 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.473527 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.473548 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.473562 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:31Z","lastTransitionTime":"2026-02-26T08:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.576800 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.576841 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.576853 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.576871 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.576884 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:31Z","lastTransitionTime":"2026-02-26T08:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.680507 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.680582 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.680602 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.680627 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.680645 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:31Z","lastTransitionTime":"2026-02-26T08:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.784871 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.785373 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.785564 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.785750 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.785931 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:31Z","lastTransitionTime":"2026-02-26T08:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.890258 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.890301 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.890314 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.890347 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.890361 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:31Z","lastTransitionTime":"2026-02-26T08:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.993012 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.993056 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.993068 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.993087 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:31 crc kubenswrapper[4741]: I0226 08:14:31.993098 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:31Z","lastTransitionTime":"2026-02-26T08:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.095729 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.095795 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.095818 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.095846 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.095868 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:32Z","lastTransitionTime":"2026-02-26T08:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.199541 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.199629 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.199649 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.200060 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.200290 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:32Z","lastTransitionTime":"2026-02-26T08:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.302878 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.302956 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.302976 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.302998 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.303015 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:32Z","lastTransitionTime":"2026-02-26T08:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.380696 4741 generic.go:334] "Generic (PLEG): container finished" podID="081d7d48-c4b9-4725-bd12-32a95c02133f" containerID="00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f" exitCode=0 Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.380780 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" event={"ID":"081d7d48-c4b9-4725-bd12-32a95c02133f","Type":"ContainerDied","Data":"00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f"} Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.397434 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.406449 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.406498 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.406514 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:32 crc 
kubenswrapper[4741]: I0226 08:14:32.406540 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.406558 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:32Z","lastTransitionTime":"2026-02-26T08:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.425434 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.449895 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546
b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.466025 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.488358 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.505434 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.509940 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.510003 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.510020 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.510044 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.510060 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:32Z","lastTransitionTime":"2026-02-26T08:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.514567 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.514765 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.514814 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.515146 4741 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.515297 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:40.515247254 +0000 UTC m=+115.511184651 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.515475 4741 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.515477 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:14:40.515431609 +0000 UTC m=+115.511369026 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.515593 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:40.515561913 +0000 UTC m=+115.511499480 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.521730 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":
\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.536215 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc 
kubenswrapper[4741]: I0226 08:14:32.552442 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.568008 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.585051 4741 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.604420 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.612392 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.612440 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.612452 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.612471 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.612483 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:32Z","lastTransitionTime":"2026-02-26T08:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.617189 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.617388 4741 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.617396 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.617488 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs podName:f2840647-3181-4a32-9386-b7f030bb9356 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:40.617463803 +0000 UTC m=+115.613401200 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs") pod "network-metrics-daemon-zlfsg" (UID: "f2840647-3181-4a32-9386-b7f030bb9356") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.617584 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.617710 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.617729 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.617741 4741 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.617773 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:40.617764682 +0000 UTC m=+115.613702279 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.617937 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.617969 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.617983 4741 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.618060 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:40.618041389 +0000 UTC m=+115.613978786 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.621311 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.636545 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc
627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:2
4Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.652965 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-res
olver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:32Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.715679 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.715720 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.715731 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.715750 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.715759 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:32Z","lastTransitionTime":"2026-02-26T08:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.786428 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.786466 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.786606 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.786428 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.786812 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.786931 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.787254 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:32 crc kubenswrapper[4741]: E0226 08:14:32.787378 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.820701 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.821060 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.821258 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.821376 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.821489 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:32Z","lastTransitionTime":"2026-02-26T08:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.924750 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.925164 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.925251 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.925347 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:32 crc kubenswrapper[4741]: I0226 08:14:32.925425 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:32Z","lastTransitionTime":"2026-02-26T08:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.028480 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.028816 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.028900 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.028992 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.029072 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:33Z","lastTransitionTime":"2026-02-26T08:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.132264 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.132641 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.132656 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.132676 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.132689 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:33Z","lastTransitionTime":"2026-02-26T08:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.235996 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.236043 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.236056 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.236076 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.236090 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:33Z","lastTransitionTime":"2026-02-26T08:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.338918 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.338955 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.338967 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.338987 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.339000 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:33Z","lastTransitionTime":"2026-02-26T08:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.392270 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerStarted","Data":"99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506"} Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.393480 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.393520 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.393589 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.402872 4741 generic.go:334] "Generic (PLEG): container finished" podID="081d7d48-c4b9-4725-bd12-32a95c02133f" containerID="b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355" exitCode=0 Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.402937 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" event={"ID":"081d7d48-c4b9-4725-bd12-32a95c02133f","Type":"ContainerDied","Data":"b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355"} Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.411930 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.427772 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc 
kubenswrapper[4741]: I0226 08:14:33.441864 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.441944 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.441964 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.441988 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.442037 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:33Z","lastTransitionTime":"2026-02-26T08:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.447940 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.470526 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.482503 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.483947 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.489184 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d6
6438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.509255 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.528529 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.545193 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.545735 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.545788 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.545809 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.545830 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.545843 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:33Z","lastTransitionTime":"2026-02-26T08:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.558142 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.576311 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.597788 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.618511 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.632552 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.648340 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.649423 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.649472 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.649485 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.649504 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.649517 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:33Z","lastTransitionTime":"2026-02-26T08:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.662658 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.676690 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.696734 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.715749 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.736079 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.752413 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:33 crc 
kubenswrapper[4741]: I0226 08:14:33.752470 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.752483 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.752505 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.752521 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:33Z","lastTransitionTime":"2026-02-26T08:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.752591 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.762465 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.772490 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.790165 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.819447 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.840690 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.855324 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.855383 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.855397 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.855418 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.855433 4741 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:33Z","lastTransitionTime":"2026-02-26T08:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.863314 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.876489 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.890802 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptabl
es-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.907578 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.920516 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:33Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.957865 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.957918 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.957929 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.957947 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:33 crc kubenswrapper[4741]: I0226 08:14:33.957959 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:33Z","lastTransitionTime":"2026-02-26T08:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.060507 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.060569 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.060589 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.060612 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.060630 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:34Z","lastTransitionTime":"2026-02-26T08:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.163281 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.163316 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.163327 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.163344 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.163356 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:34Z","lastTransitionTime":"2026-02-26T08:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.266072 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.266143 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.266160 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.266184 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.266203 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:34Z","lastTransitionTime":"2026-02-26T08:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.369603 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.369649 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.369666 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.369691 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.369709 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:34Z","lastTransitionTime":"2026-02-26T08:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.413230 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" event={"ID":"081d7d48-c4b9-4725-bd12-32a95c02133f","Type":"ContainerStarted","Data":"d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a"} Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.429979 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.452551 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.470309 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f
b91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.473027 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.473901 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.473925 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.473991 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:34 crc kubenswrapper[4741]: 
I0226 08:14:34.474007 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:34Z","lastTransitionTime":"2026-02-26T08:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.484223 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.500802 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.519187 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.544300 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptabl
es-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.555258 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.571157 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.577132 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.577166 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.577176 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.577188 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.577199 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:34Z","lastTransitionTime":"2026-02-26T08:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.590602 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:
14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.610606 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.626637 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.644705 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.660872 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.676148 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:34Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.680165 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.680219 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.680231 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.680250 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.680274 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:34Z","lastTransitionTime":"2026-02-26T08:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.782975 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.783024 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.783039 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.783057 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.783070 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:34Z","lastTransitionTime":"2026-02-26T08:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.786581 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.786662 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:34 crc kubenswrapper[4741]: E0226 08:14:34.786694 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:34 crc kubenswrapper[4741]: E0226 08:14:34.786782 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.786834 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:34 crc kubenswrapper[4741]: E0226 08:14:34.786897 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.786981 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:34 crc kubenswrapper[4741]: E0226 08:14:34.787057 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.885714 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.885748 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.885756 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.885769 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.885779 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:34Z","lastTransitionTime":"2026-02-26T08:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.990900 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.990960 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.990977 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.991006 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:34 crc kubenswrapper[4741]: I0226 08:14:34.991026 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:34Z","lastTransitionTime":"2026-02-26T08:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.094946 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.095024 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.095050 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.095081 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.095152 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:35Z","lastTransitionTime":"2026-02-26T08:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.198585 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.198625 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.198637 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.198655 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.198667 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:35Z","lastTransitionTime":"2026-02-26T08:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.301553 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.301630 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.301649 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.301677 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.301699 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:35Z","lastTransitionTime":"2026-02-26T08:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.404709 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.404778 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.404790 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.404811 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.404824 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:35Z","lastTransitionTime":"2026-02-26T08:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.508553 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.508614 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.508627 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.508647 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.508661 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:35Z","lastTransitionTime":"2026-02-26T08:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.611268 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.611318 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.611329 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.611347 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.611359 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:35Z","lastTransitionTime":"2026-02-26T08:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.714688 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.714740 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.714752 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.714770 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.714782 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:35Z","lastTransitionTime":"2026-02-26T08:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.801677 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.817640 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.817698 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.817712 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.817736 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.817751 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:35Z","lastTransitionTime":"2026-02-26T08:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.820483 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.835726 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-26T08:14:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.848649 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:35 crc 
kubenswrapper[4741]: I0226 08:14:35.862648 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.875281 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.888651 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.913733 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.921690 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.921746 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.921765 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.921791 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.921809 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:35Z","lastTransitionTime":"2026-02-26T08:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.931182 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.953869 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.971498 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:35 crc kubenswrapper[4741]: I0226 08:14:35.993036 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.017293 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.025597 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.025663 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.025714 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.025747 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.025767 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.037313 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.050431 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.129337 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.129430 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.129455 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.129487 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 
08:14:36.129510 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.232997 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.233077 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.233091 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.233156 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.233173 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.336601 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.336650 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.336663 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.336682 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.336694 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.422892 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/0.log" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.426638 4741 generic.go:334] "Generic (PLEG): container finished" podID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerID="99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506" exitCode=1 Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.426721 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.428593 4741 scope.go:117] "RemoveContainer" containerID="99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.439684 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.439720 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.439728 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.439746 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.439757 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.449920 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.467226 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.481369 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptabl
es-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.494778 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.512315 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.526360 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d
180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.540713 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.542590 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc 
kubenswrapper[4741]: I0226 08:14:36.542644 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.542659 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.542680 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.542699 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.555445 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.564468 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.564509 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.564521 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 
08:14:36.564541 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.564552 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.575954 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: E0226 08:14:36.587979 4741 
kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.592921 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.593356 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.593388 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.593399 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.593417 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.593429 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.605349 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: E0226 08:14:36.610968 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.615861 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.615929 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.615944 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.616342 4741 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.616387 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.619556 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: E0226 08:14:36.630718 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.634385 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.634420 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.634432 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.634450 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.634464 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.640526 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\" 6732 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 08:14:36.308154 6732 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 08:14:36.308189 6732 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI0226 08:14:36.308235 6732 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 08:14:36.308262 6732 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 08:14:36.308283 6732 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 08:14:36.308297 6732 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 08:14:36.308300 6732 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 08:14:36.308311 6732 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 08:14:36.308317 6732 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 08:14:36.308319 6732 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 08:14:36.308330 6732 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 08:14:36.308375 6732 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 08:14:36.308381 6732 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 08:14:36.308403 6732 factory.go:656] Stopping watch factory\\\\nI0226 08:14:36.308420 6732 ovnkube.go:599] Stopped ovnkube\\\\nI0226 
08:14:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554
375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: E0226 08:14:36.650289 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.654791 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plug
in\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.654975 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.655003 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.655172 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.655193 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.655207 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.669172 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: E0226 08:14:36.670426 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:36 crc kubenswrapper[4741]: E0226 08:14:36.670589 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.672277 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.672319 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.672332 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.672350 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.672361 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.775518 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.775593 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.775613 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.775637 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.775658 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.786802 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.786831 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.786872 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:36 crc kubenswrapper[4741]: E0226 08:14:36.786930 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.786965 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:36 crc kubenswrapper[4741]: E0226 08:14:36.787154 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:36 crc kubenswrapper[4741]: E0226 08:14:36.787202 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:36 crc kubenswrapper[4741]: E0226 08:14:36.787253 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.879174 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.879239 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.879260 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.879289 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.879308 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.982658 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.982715 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.982734 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.982762 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:36 crc kubenswrapper[4741]: I0226 08:14:36.982785 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:36Z","lastTransitionTime":"2026-02-26T08:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.086549 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.086647 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.086659 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.086682 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.086695 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:37Z","lastTransitionTime":"2026-02-26T08:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.189740 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.190183 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.190317 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.190457 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.190567 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:37Z","lastTransitionTime":"2026-02-26T08:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.293398 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.293490 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.293509 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.293543 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.293562 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:37Z","lastTransitionTime":"2026-02-26T08:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.396708 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.396775 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.396792 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.396819 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.396837 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:37Z","lastTransitionTime":"2026-02-26T08:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.434065 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/0.log" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.438356 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerStarted","Data":"8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a"} Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.499337 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.499407 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.499428 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.499515 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.499538 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:37Z","lastTransitionTime":"2026-02-26T08:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.603404 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.603454 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.603471 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.603495 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.603512 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:37Z","lastTransitionTime":"2026-02-26T08:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.706569 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.706643 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.706662 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.706688 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.706706 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:37Z","lastTransitionTime":"2026-02-26T08:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.809803 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.809879 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.809897 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.809926 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.809946 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:37Z","lastTransitionTime":"2026-02-26T08:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.913515 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.913578 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.913595 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.913617 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:37 crc kubenswrapper[4741]: I0226 08:14:37.913641 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:37Z","lastTransitionTime":"2026-02-26T08:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.017306 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.017371 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.017382 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.017402 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.017415 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:38Z","lastTransitionTime":"2026-02-26T08:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.120548 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.120608 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.120626 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.120650 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.120670 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:38Z","lastTransitionTime":"2026-02-26T08:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.224535 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.224630 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.224664 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.224697 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.224717 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:38Z","lastTransitionTime":"2026-02-26T08:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.328653 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.328710 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.328719 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.328736 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.328747 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:38Z","lastTransitionTime":"2026-02-26T08:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.431885 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.431934 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.431947 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.431966 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.431978 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:38Z","lastTransitionTime":"2026-02-26T08:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.442198 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.455864 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.476352 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.492462 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.507747 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.525546 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.535327 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:38 crc 
kubenswrapper[4741]: I0226 08:14:38.535363 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.535375 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.535393 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.535406 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:38Z","lastTransitionTime":"2026-02-26T08:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.539657 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.561048 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\" 6732 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 08:14:36.308154 6732 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 08:14:36.308189 6732 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 08:14:36.308235 6732 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 08:14:36.308262 6732 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 08:14:36.308283 6732 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 08:14:36.308297 6732 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 08:14:36.308300 6732 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 08:14:36.308311 6732 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 08:14:36.308317 6732 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 08:14:36.308319 6732 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 08:14:36.308330 6732 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 08:14:36.308375 6732 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 08:14:36.308381 6732 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 08:14:36.308403 6732 factory.go:656] Stopping watch factory\\\\nI0226 08:14:36.308420 6732 ovnkube.go:599] Stopped ovnkube\\\\nI0226 
08:14:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.575474 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.586384 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.602606 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.618465 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.628958 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.638263 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.638311 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.638323 4741 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.638342 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.638353 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:38Z","lastTransitionTime":"2026-02-26T08:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.642803 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.654443 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.665164 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:38 crc 
kubenswrapper[4741]: I0226 08:14:38.741304 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.741368 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.741384 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.741402 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.741415 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:38Z","lastTransitionTime":"2026-02-26T08:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.786691 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.786714 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.786758 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:38 crc kubenswrapper[4741]: E0226 08:14:38.786893 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.786948 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:38 crc kubenswrapper[4741]: E0226 08:14:38.787179 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:38 crc kubenswrapper[4741]: E0226 08:14:38.787260 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:38 crc kubenswrapper[4741]: E0226 08:14:38.787340 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.845756 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.845849 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.845868 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.845894 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.845912 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:38Z","lastTransitionTime":"2026-02-26T08:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.948802 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.948864 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.948882 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.948904 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:38 crc kubenswrapper[4741]: I0226 08:14:38.948923 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:38Z","lastTransitionTime":"2026-02-26T08:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.051864 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.051924 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.051944 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.051966 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.051982 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:39Z","lastTransitionTime":"2026-02-26T08:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.155920 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.155996 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.156019 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.156048 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.156070 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:39Z","lastTransitionTime":"2026-02-26T08:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.260212 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.260290 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.260313 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.260343 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.260365 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:39Z","lastTransitionTime":"2026-02-26T08:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.364336 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.364415 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.364433 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.364462 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.364482 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:39Z","lastTransitionTime":"2026-02-26T08:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.467746 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.467796 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.467809 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.467829 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.467840 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:39Z","lastTransitionTime":"2026-02-26T08:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.571241 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.571308 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.571327 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.571357 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.571414 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:39Z","lastTransitionTime":"2026-02-26T08:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.674850 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.674905 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.674921 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.674946 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.674960 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:39Z","lastTransitionTime":"2026-02-26T08:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.777976 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.778034 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.778043 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.778061 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.778071 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:39Z","lastTransitionTime":"2026-02-26T08:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.881953 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.882325 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.882346 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.882369 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.882388 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:39Z","lastTransitionTime":"2026-02-26T08:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.985896 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.986042 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.986442 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.986488 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:39 crc kubenswrapper[4741]: I0226 08:14:39.986561 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:39Z","lastTransitionTime":"2026-02-26T08:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.094249 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.094335 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.094362 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.094597 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.094657 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:40Z","lastTransitionTime":"2026-02-26T08:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.198637 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.198702 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.198720 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.198744 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.198760 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:40Z","lastTransitionTime":"2026-02-26T08:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.302482 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.302560 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.302655 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.302688 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.302706 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:40Z","lastTransitionTime":"2026-02-26T08:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.406432 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.406504 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.406527 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.406556 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.406578 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:40Z","lastTransitionTime":"2026-02-26T08:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.510214 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.510281 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.510299 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.510324 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.510344 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:40Z","lastTransitionTime":"2026-02-26T08:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.518638 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.518792 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 08:14:56.518762491 +0000 UTC m=+131.514699918 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.518913 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.518970 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.519090 4741 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.519211 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-26 08:14:56.519196443 +0000 UTC m=+131.515133870 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.520182 4741 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.520277 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:56.520256763 +0000 UTC m=+131.516194190 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.613517 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.613577 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.613600 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.613634 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.613657 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:40Z","lastTransitionTime":"2026-02-26T08:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.620027 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.620154 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.620211 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.620288 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.620326 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.620342 4741 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:40 crc 
kubenswrapper[4741]: E0226 08:14:40.620346 4741 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.620408 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs podName:f2840647-3181-4a32-9386-b7f030bb9356 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:56.620385374 +0000 UTC m=+131.616322801 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs") pod "network-metrics-daemon-zlfsg" (UID: "f2840647-3181-4a32-9386-b7f030bb9356") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.620432 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:56.620420365 +0000 UTC m=+131.616357792 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.620494 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.620546 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.620568 4741 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.620657 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 08:14:56.620623881 +0000 UTC m=+131.616561298 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.717379 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.717442 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.717460 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.717485 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.717507 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:40Z","lastTransitionTime":"2026-02-26T08:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.786533 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.786623 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.786546 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.786532 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.786783 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.786938 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.787062 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:40 crc kubenswrapper[4741]: E0226 08:14:40.787236 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.820397 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.820571 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.820603 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.820686 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.820771 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:40Z","lastTransitionTime":"2026-02-26T08:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.925576 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.925627 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.925640 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.925659 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:40 crc kubenswrapper[4741]: I0226 08:14:40.925672 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:40Z","lastTransitionTime":"2026-02-26T08:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.030547 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.030605 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.030620 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.030641 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.030657 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:41Z","lastTransitionTime":"2026-02-26T08:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.132893 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.132957 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.132976 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.132996 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.133010 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:41Z","lastTransitionTime":"2026-02-26T08:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.236677 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.236745 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.236769 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.236791 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.236806 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:41Z","lastTransitionTime":"2026-02-26T08:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.340323 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.340402 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.340420 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.340450 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.340469 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:41Z","lastTransitionTime":"2026-02-26T08:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.443780 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.443856 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.443876 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.443905 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.443931 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:41Z","lastTransitionTime":"2026-02-26T08:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.459199 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/1.log" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.460313 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/0.log" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.464814 4741 generic.go:334] "Generic (PLEG): container finished" podID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerID="8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a" exitCode=1 Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.464890 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a"} Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.464963 4741 scope.go:117] "RemoveContainer" containerID="99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.466257 4741 scope.go:117] "RemoveContainer" containerID="8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a" Feb 26 08:14:41 crc kubenswrapper[4741]: E0226 08:14:41.466706 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.484658 4741 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.499298 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc 
kubenswrapper[4741]: I0226 08:14:41.517757 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.540776 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.552443 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.552505 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.552523 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.552563 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.552582 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:41Z","lastTransitionTime":"2026-02-26T08:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.557738 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.572957 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.588198 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T0
8:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.607445 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.621146 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.637279 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.655600 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.658391 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.658438 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.658455 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.658480 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.658500 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:41Z","lastTransitionTime":"2026-02-26T08:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.675472 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\" 6732 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 08:14:36.308154 6732 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 08:14:36.308189 6732 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 08:14:36.308235 6732 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 08:14:36.308262 6732 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 08:14:36.308283 6732 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 08:14:36.308297 6732 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 08:14:36.308300 6732 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 08:14:36.308311 6732 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 08:14:36.308317 6732 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 08:14:36.308319 6732 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 08:14:36.308330 6732 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 08:14:36.308375 6732 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 08:14:36.308381 6732 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 08:14:36.308403 6732 factory.go:656] Stopping watch factory\\\\nI0226 08:14:36.308420 6732 ovnkube.go:599] Stopped ovnkube\\\\nI0226 08:14:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:40Z\\\",\\\"message\\\":\\\"ift-apiserver/check-endpoints-k7smk as it is not a known egress service\\\\nI0226 08:14:39.940606 6873 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 08:14:39.940647 6873 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 08:14:39.940681 6873 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI0226 08:14:39.940709 6873 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 08:14:39.940913 6873 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 08:14:39.940981 6873 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 08:14:39.941141 6873 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 08:14:39.941156 6873 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 08:14:39.941179 6873 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 08:14:39.941211 6873 factory.go:656] Stopping watch factory\\\\nI0226 08:14:39.941229 6873 ovnkube.go:599] Stopped ovnkube\\\\nI0226 08:14:39.941264 6873 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 08:14:39.941276 6873 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 08:14:39.941289 6873 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 08:14:39.941299 6873 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 
08:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c5543
75f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.693279 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.705763 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.715436 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:41Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.761789 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.761832 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.761841 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.761856 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.761866 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:41Z","lastTransitionTime":"2026-02-26T08:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.865207 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.865262 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.865272 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.865289 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.865299 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:41Z","lastTransitionTime":"2026-02-26T08:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.967761 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.967837 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.967855 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.967885 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:41 crc kubenswrapper[4741]: I0226 08:14:41.967908 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:41Z","lastTransitionTime":"2026-02-26T08:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.071456 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.071537 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.071557 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.071585 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.071606 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:42Z","lastTransitionTime":"2026-02-26T08:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.174475 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.174544 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.174561 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.174593 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.174616 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:42Z","lastTransitionTime":"2026-02-26T08:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.278456 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.278498 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.278506 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.278521 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.278530 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:42Z","lastTransitionTime":"2026-02-26T08:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.382992 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.383061 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.383078 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.383106 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.383159 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:42Z","lastTransitionTime":"2026-02-26T08:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.472902 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/1.log" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.486313 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.486383 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.486412 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.486439 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.486464 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:42Z","lastTransitionTime":"2026-02-26T08:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.590650 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.590736 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.590749 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.590798 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.590815 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:42Z","lastTransitionTime":"2026-02-26T08:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.695519 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.695592 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.695611 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.695641 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.695665 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:42Z","lastTransitionTime":"2026-02-26T08:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.786204 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.786210 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:42 crc kubenswrapper[4741]: E0226 08:14:42.786457 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.786244 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.786219 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:42 crc kubenswrapper[4741]: E0226 08:14:42.786632 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:42 crc kubenswrapper[4741]: E0226 08:14:42.786796 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:42 crc kubenswrapper[4741]: E0226 08:14:42.786953 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.798467 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.798515 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.798528 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.798548 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.798560 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:42Z","lastTransitionTime":"2026-02-26T08:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.901900 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.901961 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.901971 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.901992 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:42 crc kubenswrapper[4741]: I0226 08:14:42.902003 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:42Z","lastTransitionTime":"2026-02-26T08:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.004426 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.004458 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.004499 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.004534 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.004547 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:43Z","lastTransitionTime":"2026-02-26T08:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.107729 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.107888 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.107906 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.107924 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.107937 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:43Z","lastTransitionTime":"2026-02-26T08:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.212025 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.212106 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.212168 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.212193 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.212214 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:43Z","lastTransitionTime":"2026-02-26T08:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.316327 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.316406 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.316430 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.316460 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.316486 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:43Z","lastTransitionTime":"2026-02-26T08:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.421243 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.421314 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.421342 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.421368 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.421388 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:43Z","lastTransitionTime":"2026-02-26T08:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.525331 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.525401 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.525419 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.525451 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.525472 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:43Z","lastTransitionTime":"2026-02-26T08:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.629319 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.629379 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.629401 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.629436 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.629460 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:43Z","lastTransitionTime":"2026-02-26T08:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.718395 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.733921 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.734009 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.734031 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.734061 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.734090 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:43Z","lastTransitionTime":"2026-02-26T08:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.744405 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:43Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.762413 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:43Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:43 crc 
kubenswrapper[4741]: I0226 08:14:43.783065 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:43Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.815654 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:43Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.842779 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.842850 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.842870 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.842894 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.842911 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:43Z","lastTransitionTime":"2026-02-26T08:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.848943 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:43Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.863789 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:43Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.883821 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T0
8:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:43Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.904488 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:43Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.917888 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:43Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.928907 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:43Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.944233 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:43Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.946306 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.946375 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.946391 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.946412 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.946426 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:43Z","lastTransitionTime":"2026-02-26T08:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.976551 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\" 6732 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 08:14:36.308154 6732 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 08:14:36.308189 6732 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 08:14:36.308235 6732 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 08:14:36.308262 6732 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 08:14:36.308283 6732 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 08:14:36.308297 6732 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 08:14:36.308300 6732 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 08:14:36.308311 6732 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 08:14:36.308317 6732 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 08:14:36.308319 6732 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 08:14:36.308330 6732 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 08:14:36.308375 6732 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 08:14:36.308381 6732 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 08:14:36.308403 6732 factory.go:656] Stopping watch factory\\\\nI0226 08:14:36.308420 6732 ovnkube.go:599] Stopped ovnkube\\\\nI0226 08:14:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:40Z\\\",\\\"message\\\":\\\"ift-apiserver/check-endpoints-k7smk as it is not a known egress service\\\\nI0226 08:14:39.940606 6873 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 08:14:39.940647 6873 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 08:14:39.940681 6873 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI0226 08:14:39.940709 6873 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 08:14:39.940913 6873 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 08:14:39.940981 6873 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 08:14:39.941141 6873 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 08:14:39.941156 6873 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 08:14:39.941179 6873 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 08:14:39.941211 6873 factory.go:656] Stopping watch factory\\\\nI0226 08:14:39.941229 6873 ovnkube.go:599] Stopped ovnkube\\\\nI0226 08:14:39.941264 6873 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 08:14:39.941276 6873 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 08:14:39.941289 6873 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 08:14:39.941299 6873 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 
08:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c5543
75f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:43Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:43 crc kubenswrapper[4741]: I0226 08:14:43.995141 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:43Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.012861 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:44Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.027080 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:44Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.050590 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.050673 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.050693 4741 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.050719 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.050742 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:44Z","lastTransitionTime":"2026-02-26T08:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.153900 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.153978 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.153998 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.154028 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.154046 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:44Z","lastTransitionTime":"2026-02-26T08:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.257812 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.257894 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.257916 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.257946 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.257968 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:44Z","lastTransitionTime":"2026-02-26T08:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.361344 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.361415 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.361436 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.361468 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.361490 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:44Z","lastTransitionTime":"2026-02-26T08:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.464830 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.464907 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.464929 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.464962 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.464983 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:44Z","lastTransitionTime":"2026-02-26T08:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.567857 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.567902 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.567911 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.567927 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.567939 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:44Z","lastTransitionTime":"2026-02-26T08:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.671243 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.671505 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.671527 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.671556 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.671575 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:44Z","lastTransitionTime":"2026-02-26T08:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.775780 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.775852 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.775874 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.775905 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.775928 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:44Z","lastTransitionTime":"2026-02-26T08:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.786157 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.786196 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.786175 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.786280 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:44 crc kubenswrapper[4741]: E0226 08:14:44.786415 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:44 crc kubenswrapper[4741]: E0226 08:14:44.786540 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:44 crc kubenswrapper[4741]: E0226 08:14:44.786743 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:44 crc kubenswrapper[4741]: E0226 08:14:44.786840 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.879987 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.880035 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.880055 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.880084 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.880106 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:44Z","lastTransitionTime":"2026-02-26T08:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.983699 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.984071 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.984085 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.984105 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:44 crc kubenswrapper[4741]: I0226 08:14:44.984139 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:44Z","lastTransitionTime":"2026-02-26T08:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.086764 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.086816 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.086829 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.086852 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.086868 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:45Z","lastTransitionTime":"2026-02-26T08:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.190214 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.190348 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.190384 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.190404 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.190416 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:45Z","lastTransitionTime":"2026-02-26T08:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.293443 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.293494 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.293505 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.293523 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.293533 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:45Z","lastTransitionTime":"2026-02-26T08:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.396264 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.396298 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.396308 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.396333 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.396344 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:45Z","lastTransitionTime":"2026-02-26T08:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.498870 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.498932 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.498945 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.498969 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.498983 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:45Z","lastTransitionTime":"2026-02-26T08:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.603291 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.603356 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.603379 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.603406 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.603425 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:45Z","lastTransitionTime":"2026-02-26T08:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:45 crc kubenswrapper[4741]: E0226 08:14:45.703883 4741 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.803783 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.818058 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:45 crc 
kubenswrapper[4741]: I0226 08:14:45.834027 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.847746 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.866638 4741 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.883976 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:45 crc kubenswrapper[4741]: E0226 08:14:45.898535 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.903130 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.922685 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-26T08:14:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.939867 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.954326 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:45 crc kubenswrapper[4741]: I0226 08:14:45.977289 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99177e2b9119b6aa19860d8f922f3fb3885a50551888e3556591aeb31ce93506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:36Z\\\",\\\"message\\\":\\\" 6732 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 08:14:36.308154 6732 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 08:14:36.308189 6732 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 08:14:36.308235 6732 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 08:14:36.308262 6732 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 08:14:36.308283 6732 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 08:14:36.308297 6732 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 08:14:36.308300 6732 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 08:14:36.308311 6732 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 08:14:36.308317 6732 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 08:14:36.308319 6732 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 08:14:36.308330 6732 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 08:14:36.308375 6732 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 08:14:36.308381 6732 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 08:14:36.308403 6732 factory.go:656] Stopping watch factory\\\\nI0226 08:14:36.308420 6732 ovnkube.go:599] Stopped ovnkube\\\\nI0226 08:14:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:40Z\\\",\\\"message\\\":\\\"ift-apiserver/check-endpoints-k7smk as it is not a known egress service\\\\nI0226 08:14:39.940606 6873 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 08:14:39.940647 6873 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 08:14:39.940681 6873 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 08:14:39.940709 6873 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 08:14:39.940913 6873 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 08:14:39.940981 6873 handler.go:208] 
Removed *v1.NetworkPolicy event handler 4\\\\nI0226 08:14:39.941141 6873 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 08:14:39.941156 6873 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 08:14:39.941179 6873 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 08:14:39.941211 6873 factory.go:656] Stopping watch factory\\\\nI0226 08:14:39.941229 6873 ovnkube.go:599] Stopped ovnkube\\\\nI0226 08:14:39.941264 6873 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 08:14:39.941276 6873 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 08:14:39.941289 6873 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 08:14:39.941299 6873 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 08:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.000003 4741 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.015619 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.038099 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5
318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.054965 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.786964 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.786997 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.787093 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:46 crc kubenswrapper[4741]: E0226 08:14:46.787307 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.787389 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:46 crc kubenswrapper[4741]: E0226 08:14:46.787630 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:46 crc kubenswrapper[4741]: E0226 08:14:46.787925 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:46 crc kubenswrapper[4741]: E0226 08:14:46.787866 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.793720 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.793831 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.793849 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.793874 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.793889 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:46Z","lastTransitionTime":"2026-02-26T08:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:46 crc kubenswrapper[4741]: E0226 08:14:46.815069 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.818717 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.818745 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.818754 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.818768 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.818779 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:46Z","lastTransitionTime":"2026-02-26T08:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:46 crc kubenswrapper[4741]: E0226 08:14:46.835292 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.839631 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.839667 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.839679 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.839696 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.839709 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:46Z","lastTransitionTime":"2026-02-26T08:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:46 crc kubenswrapper[4741]: E0226 08:14:46.856937 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.861950 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.861982 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.861994 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.862009 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.862020 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:46Z","lastTransitionTime":"2026-02-26T08:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:46 crc kubenswrapper[4741]: E0226 08:14:46.881394 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.886858 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.886910 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.886924 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.886946 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:46 crc kubenswrapper[4741]: I0226 08:14:46.886962 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:46Z","lastTransitionTime":"2026-02-26T08:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:46 crc kubenswrapper[4741]: E0226 08:14:46.903993 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:46 crc kubenswrapper[4741]: E0226 08:14:46.904297 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 08:14:47 crc kubenswrapper[4741]: I0226 08:14:47.802905 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 26 08:14:48 crc kubenswrapper[4741]: I0226 08:14:48.787177 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:48 crc kubenswrapper[4741]: I0226 08:14:48.787241 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:48 crc kubenswrapper[4741]: E0226 08:14:48.787348 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:48 crc kubenswrapper[4741]: I0226 08:14:48.787458 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:48 crc kubenswrapper[4741]: E0226 08:14:48.787610 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:48 crc kubenswrapper[4741]: I0226 08:14:48.787745 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:48 crc kubenswrapper[4741]: E0226 08:14:48.787817 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:48 crc kubenswrapper[4741]: E0226 08:14:48.787964 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:50 crc kubenswrapper[4741]: I0226 08:14:50.786193 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:50 crc kubenswrapper[4741]: I0226 08:14:50.786308 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:50 crc kubenswrapper[4741]: I0226 08:14:50.786376 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:50 crc kubenswrapper[4741]: I0226 08:14:50.786368 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:50 crc kubenswrapper[4741]: E0226 08:14:50.786545 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:50 crc kubenswrapper[4741]: E0226 08:14:50.786679 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:50 crc kubenswrapper[4741]: E0226 08:14:50.786804 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:50 crc kubenswrapper[4741]: E0226 08:14:50.786890 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:50 crc kubenswrapper[4741]: E0226 08:14:50.900502 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:14:52 crc kubenswrapper[4741]: I0226 08:14:52.786660 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:52 crc kubenswrapper[4741]: I0226 08:14:52.786760 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:52 crc kubenswrapper[4741]: E0226 08:14:52.786820 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:52 crc kubenswrapper[4741]: I0226 08:14:52.786870 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:52 crc kubenswrapper[4741]: I0226 08:14:52.786866 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:52 crc kubenswrapper[4741]: E0226 08:14:52.787054 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:52 crc kubenswrapper[4741]: E0226 08:14:52.787144 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:52 crc kubenswrapper[4741]: E0226 08:14:52.787277 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:53 crc kubenswrapper[4741]: I0226 08:14:53.787802 4741 scope.go:117] "RemoveContainer" containerID="8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a" Feb 26 08:14:53 crc kubenswrapper[4741]: I0226 08:14:53.809568 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:53Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:53 crc kubenswrapper[4741]: I0226 08:14:53.837025 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:53Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:53 crc kubenswrapper[4741]: I0226 08:14:53.856768 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:14:53Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:53 crc kubenswrapper[4741]: I0226 08:14:53.878990 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:53Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:53 crc kubenswrapper[4741]: I0226 08:14:53.901086 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:53Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:53 crc kubenswrapper[4741]: I0226 08:14:53.921478 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:53Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:53 crc kubenswrapper[4741]: I0226 08:14:53.936234 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:53Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:53 crc kubenswrapper[4741]: I0226 08:14:53.955161 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:53Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:53 crc kubenswrapper[4741]: I0226 08:14:53.971763 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:53Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:53 crc kubenswrapper[4741]: I0226 08:14:53.993022 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:53Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.019576 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:40Z\\\",\\\"message\\\":\\\"ift-apiserver/check-endpoints-k7smk as it is not a known egress service\\\\nI0226 
08:14:39.940606 6873 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 08:14:39.940647 6873 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 08:14:39.940681 6873 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 08:14:39.940709 6873 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 08:14:39.940913 6873 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 08:14:39.940981 6873 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 08:14:39.941141 6873 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 08:14:39.941156 6873 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 08:14:39.941179 6873 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 08:14:39.941211 6873 factory.go:656] Stopping watch factory\\\\nI0226 08:14:39.941229 6873 ovnkube.go:599] Stopped ovnkube\\\\nI0226 08:14:39.941264 6873 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 08:14:39.941276 6873 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 08:14:39.941289 6873 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 08:14:39.941299 6873 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 08:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e09216
8521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.046260 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.069768 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.088869 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.106905 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.129544 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5
318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.526717 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/1.log" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.529957 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerStarted","Data":"a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7"} Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.530616 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.550051 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.564298 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc 
kubenswrapper[4741]: I0226 08:14:54.580622 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.595200 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.606405 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.616471 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.627639 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.641423 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.650992 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.664508 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a5
9daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.675032 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.692174 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.706761 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.728618 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:40Z\\\",\\\"message\\\":\\\"ift-apiserver/check-endpoints-k7smk as it is not a known egress service\\\\nI0226 
08:14:39.940606 6873 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 08:14:39.940647 6873 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 08:14:39.940681 6873 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 08:14:39.940709 6873 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 08:14:39.940913 6873 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 08:14:39.940981 6873 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 08:14:39.941141 6873 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 08:14:39.941156 6873 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 08:14:39.941179 6873 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 08:14:39.941211 6873 factory.go:656] Stopping watch factory\\\\nI0226 08:14:39.941229 6873 ovnkube.go:599] Stopped ovnkube\\\\nI0226 08:14:39.941264 6873 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 08:14:39.941276 6873 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 08:14:39.941289 6873 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 08:14:39.941299 6873 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 
08:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.744542 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5
318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.760602 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:54Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.787008 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.787098 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:54 crc kubenswrapper[4741]: E0226 08:14:54.787198 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.787231 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:54 crc kubenswrapper[4741]: E0226 08:14:54.787378 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:54 crc kubenswrapper[4741]: E0226 08:14:54.787435 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:54 crc kubenswrapper[4741]: I0226 08:14:54.787449 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:54 crc kubenswrapper[4741]: E0226 08:14:54.787530 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.536728 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/2.log" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.537850 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/1.log" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.542233 4741 generic.go:334] "Generic (PLEG): container finished" podID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerID="a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7" exitCode=1 Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.542298 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7"} Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.542394 4741 scope.go:117] "RemoveContainer" containerID="8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.543379 4741 scope.go:117] "RemoveContainer" containerID="a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7" Feb 26 08:14:55 crc kubenswrapper[4741]: 
E0226 08:14:55.543652 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.566060 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node
-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.585241 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.603580 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.626660 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:40Z\\\",\\\"message\\\":\\\"ift-apiserver/check-endpoints-k7smk as it is not a known egress service\\\\nI0226 08:14:39.940606 6873 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 08:14:39.940647 6873 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 08:14:39.940681 6873 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 08:14:39.940709 
6873 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 08:14:39.940913 6873 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 08:14:39.940981 6873 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 08:14:39.941141 6873 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 08:14:39.941156 6873 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 08:14:39.941179 6873 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 08:14:39.941211 6873 factory.go:656] Stopping watch factory\\\\nI0226 08:14:39.941229 6873 ovnkube.go:599] Stopped ovnkube\\\\nI0226 08:14:39.941264 6873 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 08:14:39.941276 6873 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 08:14:39.941289 6873 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 08:14:39.941299 6873 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 08:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:54Z\\\",\\\"message\\\":\\\"ions:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 08:14:54.773017 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0226 08:14:54.773617 7060 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0226 08:14:54.773641 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0226 08:14:54.773638 7060 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"
host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.645299 
4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\
"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea
1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.661603 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.675399 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.691722 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.706728 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc 
kubenswrapper[4741]: I0226 08:14:55.723789 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.739162 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.756867 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.769692 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.783881 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.803568 4741 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.815213 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.830375 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.844667 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.860402 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.874275 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc 
kubenswrapper[4741]: I0226 08:14:55.892966 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: E0226 08:14:55.902403 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.909243 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.928019 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.943625 4741 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.958959 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.975387 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:55 crc kubenswrapper[4741]: I0226 08:14:55.989618 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:55Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.008204 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.026407 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.057447 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c09ca6680e89ba10710baab052c78817a4b07f9dd3db9a0f971161097f7824a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:40Z\\\",\\\"message\\\":\\\"ift-apiserver/check-endpoints-k7smk as it is not a known egress service\\\\nI0226 
08:14:39.940606 6873 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 08:14:39.940647 6873 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 08:14:39.940681 6873 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 08:14:39.940709 6873 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 08:14:39.940913 6873 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 08:14:39.940981 6873 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 08:14:39.941141 6873 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 08:14:39.941156 6873 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 08:14:39.941179 6873 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 08:14:39.941211 6873 factory.go:656] Stopping watch factory\\\\nI0226 08:14:39.941229 6873 ovnkube.go:599] Stopped ovnkube\\\\nI0226 08:14:39.941264 6873 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 08:14:39.941276 6873 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 08:14:39.941289 6873 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 08:14:39.941299 6873 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 08:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:54Z\\\",\\\"message\\\":\\\"ions:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:}]\\\\nI0226 08:14:54.773017 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0226 08:14:54.773617 7060 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0226 08:14:54.773641 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0226 08:14:54.773638 7060 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.081751 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.102399 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.547905 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/2.log" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.551500 4741 scope.go:117] "RemoveContainer" containerID="a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7" Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.551712 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.567937 4741 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.589534 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.604453 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.618981 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.619186 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.619219 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.619374 4741 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.619440 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:15:28.619423669 +0000 UTC m=+163.615361056 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.619524 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:15:28.61947235 +0000 UTC m=+163.615409777 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.619774 4741 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.619967 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:15:28.619890292 +0000 UTC m=+163.615827709 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.624323 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.643618 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.657629 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.672396 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.688568 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.704423 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.720421 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.720538 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.720593 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.720622 4741 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.720700 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs podName:f2840647-3181-4a32-9386-b7f030bb9356 nodeName:}" failed. No retries permitted until 2026-02-26 08:15:28.720680312 +0000 UTC m=+163.716617699 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs") pod "network-metrics-daemon-zlfsg" (UID: "f2840647-3181-4a32-9386-b7f030bb9356") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.720838 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.720884 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.720915 4741 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.720988 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 08:15:28.72096516 +0000 UTC m=+163.716902557 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.721089 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.721153 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.721165 4741 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.721224 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 08:15:28.721212106 +0000 UTC m=+163.717149493 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.727969 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:54Z\\\",\\\"message\\\":\\\"ions:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 08:14:54.773017 7060 ovn.go:134] Ensuring zone 
local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0226 08:14:54.773617 7060 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0226 08:14:54.773641 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0226 08:14:54.773638 7060 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e09216
8521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.748685 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.770640 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.785460 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.786569 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.786613 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.786584 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.786584 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.786732 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.786844 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.786916 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:56 crc kubenswrapper[4741]: E0226 08:14:56.786958 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.802428 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.818362 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:56 crc kubenswrapper[4741]: I0226 08:14:56.834289 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:56Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.197865 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.197922 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.197934 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.197954 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.197968 4741 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:57Z","lastTransitionTime":"2026-02-26T08:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:14:57 crc kubenswrapper[4741]: E0226 08:14:57.220267 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:57Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.225779 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.225830 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.225843 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.225864 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.225878 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:57Z","lastTransitionTime":"2026-02-26T08:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:57 crc kubenswrapper[4741]: E0226 08:14:57.248698 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:57Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.257038 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.257102 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.257136 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.257159 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.257173 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:57Z","lastTransitionTime":"2026-02-26T08:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:57 crc kubenswrapper[4741]: E0226 08:14:57.278848 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:57Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.285332 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.285389 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.285408 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.285432 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.285448 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:57Z","lastTransitionTime":"2026-02-26T08:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:57 crc kubenswrapper[4741]: E0226 08:14:57.305701 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:57Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.310454 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.310519 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.310537 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.310564 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:14:57 crc kubenswrapper[4741]: I0226 08:14:57.310583 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:14:57Z","lastTransitionTime":"2026-02-26T08:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:14:57 crc kubenswrapper[4741]: E0226 08:14:57.326667 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:14:57Z is after 2025-08-24T17:21:41Z" Feb 26 08:14:57 crc kubenswrapper[4741]: E0226 08:14:57.326892 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 08:14:58 crc kubenswrapper[4741]: I0226 08:14:58.786193 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:14:58 crc kubenswrapper[4741]: I0226 08:14:58.786211 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:14:58 crc kubenswrapper[4741]: I0226 08:14:58.786252 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:14:58 crc kubenswrapper[4741]: I0226 08:14:58.786295 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:14:58 crc kubenswrapper[4741]: E0226 08:14:58.786924 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:14:58 crc kubenswrapper[4741]: E0226 08:14:58.787005 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:14:58 crc kubenswrapper[4741]: E0226 08:14:58.787130 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:14:58 crc kubenswrapper[4741]: E0226 08:14:58.787205 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:00 crc kubenswrapper[4741]: I0226 08:15:00.786212 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:00 crc kubenswrapper[4741]: I0226 08:15:00.786336 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:00 crc kubenswrapper[4741]: E0226 08:15:00.786405 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:00 crc kubenswrapper[4741]: I0226 08:15:00.786560 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:00 crc kubenswrapper[4741]: I0226 08:15:00.786560 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:00 crc kubenswrapper[4741]: E0226 08:15:00.786728 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:00 crc kubenswrapper[4741]: E0226 08:15:00.786849 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:00 crc kubenswrapper[4741]: E0226 08:15:00.787234 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:00 crc kubenswrapper[4741]: E0226 08:15:00.904251 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:15:02 crc kubenswrapper[4741]: I0226 08:15:02.786931 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:02 crc kubenswrapper[4741]: I0226 08:15:02.786949 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:02 crc kubenswrapper[4741]: E0226 08:15:02.787128 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:02 crc kubenswrapper[4741]: I0226 08:15:02.787188 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:02 crc kubenswrapper[4741]: I0226 08:15:02.787202 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:02 crc kubenswrapper[4741]: E0226 08:15:02.787391 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:02 crc kubenswrapper[4741]: E0226 08:15:02.787489 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:02 crc kubenswrapper[4741]: E0226 08:15:02.787589 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:03 crc kubenswrapper[4741]: I0226 08:15:03.804475 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 26 08:15:04 crc kubenswrapper[4741]: I0226 08:15:04.786989 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:04 crc kubenswrapper[4741]: I0226 08:15:04.787234 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:04 crc kubenswrapper[4741]: I0226 08:15:04.787311 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:04 crc kubenswrapper[4741]: I0226 08:15:04.787736 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:04 crc kubenswrapper[4741]: E0226 08:15:04.787896 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:04 crc kubenswrapper[4741]: E0226 08:15:04.788011 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:04 crc kubenswrapper[4741]: E0226 08:15:04.788134 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:04 crc kubenswrapper[4741]: E0226 08:15:04.788176 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:05 crc kubenswrapper[4741]: I0226 08:15:05.808625 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:05Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:05 crc kubenswrapper[4741]: I0226 08:15:05.833884 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:05Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:05 crc kubenswrapper[4741]: I0226 08:15:05.850885 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:05Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:05 crc kubenswrapper[4741]: I0226 08:15:05.874581 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:05Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:05 crc kubenswrapper[4741]: I0226 08:15:05.895356 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda7a3ac-addf-4c1f-a39c-2e6d0dd57b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f37351e10c025a82957f2d8c27749be49476cdcd05f1f6619723c72099dd01dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig 
--namespace=openshift-kube-controller-manager -v=2\\\\nI0226 08:12:48.093165 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 08:12:48.095259 1 observer_polling.go:159] Starting file observer\\\\nI0226 08:12:48.142891 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 08:12:48.147138 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 08:13:18.073143 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 08:13:18.073352 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc93b4aac2624a69b157adf7a2bdc7a34a168ab88b2a7d6c9c5d2f81ac9f8ee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f31097b46a80af3a1e40a888533457161d716ad5527e9455ce3734faab6213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:05Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:05 crc kubenswrapper[4741]: E0226 08:15:05.921998 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 26 08:15:05 crc kubenswrapper[4741]: I0226 08:15:05.927001 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:05Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:05 crc kubenswrapper[4741]: I0226 08:15:05.945688 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:05Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:05 crc kubenswrapper[4741]: I0226 08:15:05.961652 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:05Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:05 crc kubenswrapper[4741]: I0226 08:15:05.979530 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:05Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:05 crc kubenswrapper[4741]: I0226 08:15:05.996139 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:05Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:06 crc kubenswrapper[4741]: I0226 08:15:06.024760 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:54Z\\\",\\\"message\\\":\\\"ions:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 08:14:54.773017 7060 ovn.go:134] Ensuring zone 
local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0226 08:14:54.773617 7060 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0226 08:14:54.773641 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0226 08:14:54.773638 7060 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e09216
8521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:06Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:06 crc kubenswrapper[4741]: I0226 08:15:06.044714 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:06Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:06 crc kubenswrapper[4741]: I0226 08:15:06.065303 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:06Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:06 crc kubenswrapper[4741]: I0226 08:15:06.081960 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:06Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:06 crc kubenswrapper[4741]: I0226 08:15:06.098819 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:15:06Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:06 crc kubenswrapper[4741]: I0226 08:15:06.116198 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:06Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:06 crc 
kubenswrapper[4741]: I0226 08:15:06.135665 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:06Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:06 crc kubenswrapper[4741]: I0226 08:15:06.842095 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:06 crc kubenswrapper[4741]: E0226 08:15:06.842344 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:06 crc kubenswrapper[4741]: I0226 08:15:06.842397 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:06 crc kubenswrapper[4741]: I0226 08:15:06.842459 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:06 crc kubenswrapper[4741]: I0226 08:15:06.842401 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:06 crc kubenswrapper[4741]: E0226 08:15:06.842580 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:06 crc kubenswrapper[4741]: E0226 08:15:06.842667 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:06 crc kubenswrapper[4741]: E0226 08:15:06.842771 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.674052 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.674100 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.674129 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.674151 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.674164 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:07Z","lastTransitionTime":"2026-02-26T08:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:07 crc kubenswrapper[4741]: E0226 08:15:07.691000 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:07Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.696358 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.696433 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.696455 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.696485 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.696506 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:07Z","lastTransitionTime":"2026-02-26T08:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:07 crc kubenswrapper[4741]: E0226 08:15:07.716517 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:07Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.722557 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.722637 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.722659 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.722688 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.722710 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:07Z","lastTransitionTime":"2026-02-26T08:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:07 crc kubenswrapper[4741]: E0226 08:15:07.745667 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:07Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.751834 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.751887 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.751906 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.751934 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.751952 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:07Z","lastTransitionTime":"2026-02-26T08:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:07 crc kubenswrapper[4741]: E0226 08:15:07.775339 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:07Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.781239 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.781306 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.781325 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.781352 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:07 crc kubenswrapper[4741]: I0226 08:15:07.781375 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:07Z","lastTransitionTime":"2026-02-26T08:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:07 crc kubenswrapper[4741]: E0226 08:15:07.802801 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:07Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:07 crc kubenswrapper[4741]: E0226 08:15:07.803028 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 08:15:08 crc kubenswrapper[4741]: I0226 08:15:08.786661 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:08 crc kubenswrapper[4741]: I0226 08:15:08.786749 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:08 crc kubenswrapper[4741]: I0226 08:15:08.786661 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:08 crc kubenswrapper[4741]: E0226 08:15:08.786956 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:08 crc kubenswrapper[4741]: E0226 08:15:08.787058 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:08 crc kubenswrapper[4741]: E0226 08:15:08.787336 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:08 crc kubenswrapper[4741]: I0226 08:15:08.787461 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:08 crc kubenswrapper[4741]: E0226 08:15:08.788205 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:08 crc kubenswrapper[4741]: I0226 08:15:08.788736 4741 scope.go:117] "RemoveContainer" containerID="a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7" Feb 26 08:15:08 crc kubenswrapper[4741]: E0226 08:15:08.789015 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" Feb 26 08:15:10 crc kubenswrapper[4741]: I0226 08:15:10.787398 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:10 crc kubenswrapper[4741]: I0226 08:15:10.787459 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:10 crc kubenswrapper[4741]: I0226 08:15:10.787463 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:10 crc kubenswrapper[4741]: I0226 08:15:10.787783 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:10 crc kubenswrapper[4741]: E0226 08:15:10.787712 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:10 crc kubenswrapper[4741]: E0226 08:15:10.787976 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:10 crc kubenswrapper[4741]: E0226 08:15:10.788083 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:10 crc kubenswrapper[4741]: E0226 08:15:10.788195 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:10 crc kubenswrapper[4741]: E0226 08:15:10.924161 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:15:12 crc kubenswrapper[4741]: I0226 08:15:12.786980 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:12 crc kubenswrapper[4741]: I0226 08:15:12.787067 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:12 crc kubenswrapper[4741]: E0226 08:15:12.787246 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:12 crc kubenswrapper[4741]: E0226 08:15:12.787416 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:12 crc kubenswrapper[4741]: I0226 08:15:12.787489 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:12 crc kubenswrapper[4741]: I0226 08:15:12.787557 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:12 crc kubenswrapper[4741]: E0226 08:15:12.787768 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:12 crc kubenswrapper[4741]: E0226 08:15:12.787864 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:13 crc kubenswrapper[4741]: I0226 08:15:13.877200 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mzt8d_3fd732e7-0e36-485f-b750-856d6869e697/kube-multus/0.log" Feb 26 08:15:13 crc kubenswrapper[4741]: I0226 08:15:13.877267 4741 generic.go:334] "Generic (PLEG): container finished" podID="3fd732e7-0e36-485f-b750-856d6869e697" containerID="991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc" exitCode=1 Feb 26 08:15:13 crc kubenswrapper[4741]: I0226 08:15:13.877315 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mzt8d" event={"ID":"3fd732e7-0e36-485f-b750-856d6869e697","Type":"ContainerDied","Data":"991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc"} Feb 26 08:15:13 crc kubenswrapper[4741]: I0226 08:15:13.877906 4741 scope.go:117] "RemoveContainer" containerID="991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc" Feb 26 08:15:13 crc kubenswrapper[4741]: I0226 08:15:13.903194 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda7a3ac-addf-4c1f-a39c-2e6d0dd57b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f37351e10c025a82957f2d8c27749be49476cdcd05f1f6619723c72099dd01dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 08:12:48.093165 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 08:12:48.095259 1 observer_polling.go:159] Starting file observer\\\\nI0226 08:12:48.142891 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 08:12:48.147138 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 08:13:18.073143 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 08:13:18.073352 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc93b4aac2624a69b157adf7a2bdc7a34a168ab88b2a7d6c9c5d2f81ac9f8ee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f31097b46a80af3a1e40a888533457161d716ad5527e9455ce3734faab6213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:13Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:13 crc kubenswrapper[4741]: I0226 08:15:13.926324 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:13Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:13 crc kubenswrapper[4741]: I0226 08:15:13.949039 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:13Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:13 crc kubenswrapper[4741]: I0226 08:15:13.972717 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:13Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:13 crc kubenswrapper[4741]: I0226 08:15:13.991599 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:13Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.012168 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:12Z\\\",\\\"message\\\":\\\"2026-02-26T08:14:27+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7\\\\n2026-02-26T08:14:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7 to /host/opt/cni/bin/\\\\n2026-02-26T08:14:27Z [verbose] multus-daemon started\\\\n2026-02-26T08:14:27Z [verbose] Readiness Indicator file check\\\\n2026-02-26T08:15:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.028877 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.045098 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.070331 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:54Z\\\",\\\"message\\\":\\\"ions:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 08:14:54.773017 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0226 08:14:54.773617 7060 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0226 08:14:54.773641 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0226 08:14:54.773638 7060 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e09216
8521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.095624 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.115474 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.136551 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.159346 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.183873 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5
318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.203311 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.223545 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.245517 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.786953 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.787059 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.787059 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:14 crc kubenswrapper[4741]: E0226 08:15:14.787223 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.787286 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:14 crc kubenswrapper[4741]: E0226 08:15:14.787504 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:14 crc kubenswrapper[4741]: E0226 08:15:14.787668 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:14 crc kubenswrapper[4741]: E0226 08:15:14.787857 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.885161 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mzt8d_3fd732e7-0e36-485f-b750-856d6869e697/kube-multus/0.log" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.885284 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mzt8d" event={"ID":"3fd732e7-0e36-485f-b750-856d6869e697","Type":"ContainerStarted","Data":"8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10"} Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.905937 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.929479 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5
318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.946298 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.966529 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:14 crc kubenswrapper[4741]: I0226 08:15:14.985607 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:15:14Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.007690 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda7a3ac-addf-4c1f-a39c-2e6d0dd57b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f37351e10c025a82957f2d8c27749be49476cdcd05f1f6619723c72099dd01dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ 
'[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 08:12:48.093165 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 08:12:48.095259 1 observer_polling.go:159] Starting file observer\\\\nI0226 08:12:48.142891 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 08:12:48.147138 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 08:13:18.073143 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 08:13:18.073352 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc93b4aac2624a69b157adf7a2bdc7a34a168ab88b2a7d6c9c5d2f81ac9f8ee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f31097b46a80af3a1e40a888533457161d716ad5527e9455ce3734faab6213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.028832 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.050062 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.072205 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.093599 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.118841 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:12Z\\\",\\\"message\\\":\\\"2026-02-26T08:14:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7\\\\n2026-02-26T08:14:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7 to /host/opt/cni/bin/\\\\n2026-02-26T08:14:27Z [verbose] multus-daemon started\\\\n2026-02-26T08:14:27Z [verbose] 
Readiness Indicator file check\\\\n2026-02-26T08:15:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.135410 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3c
e5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.154941 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.188561 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:54Z\\\",\\\"message\\\":\\\"ions:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 08:14:54.773017 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0226 08:14:54.773617 7060 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0226 08:14:54.773641 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0226 08:14:54.773638 7060 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e09216
8521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.216100 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.231427 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.248652 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.806639 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a
0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.822798 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.843167 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.866742 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:54Z\\\",\\\"message\\\":\\\"ions:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 08:14:54.773017 7060 ovn.go:134] Ensuring zone 
local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0226 08:14:54.773617 7060 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0226 08:14:54.773641 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0226 08:14:54.773638 7060 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e09216
8521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.890666 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.911946 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: E0226 08:15:15.928383 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.930183 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.951472 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc kubenswrapper[4741]: I0226 08:15:15.972794 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:15 crc 
kubenswrapper[4741]: I0226 08:15:15.992017 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:15Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:16 crc kubenswrapper[4741]: I0226 08:15:16.014069 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:16Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:16 crc kubenswrapper[4741]: I0226 08:15:16.038614 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:16Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:16 crc kubenswrapper[4741]: I0226 08:15:16.053699 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:16Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:16 crc kubenswrapper[4741]: I0226 08:15:16.069185 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:12Z\\\",\\\"message\\\":\\\"2026-02-26T08:14:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7\\\\n2026-02-26T08:14:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7 to /host/opt/cni/bin/\\\\n2026-02-26T08:14:27Z [verbose] multus-daemon started\\\\n2026-02-26T08:14:27Z [verbose] 
Readiness Indicator file check\\\\n2026-02-26T08:15:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:16Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:16 crc kubenswrapper[4741]: I0226 08:15:16.083575 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda7a3ac-addf-4c1f-a39c-2e6d0dd57b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f37351e10c025a82957f2d8c27749be49476cdcd05f1f6619723c72099dd01dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 08:12:48.093165 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 08:12:48.095259 1 observer_polling.go:159] Starting file observer\\\\nI0226 08:12:48.142891 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 08:12:48.147138 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 08:13:18.073143 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 08:13:18.073352 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc93b4aac2624a69b157adf7a2bdc7a34a168ab88b2a7d6c9c5d2f81ac9f8ee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f31097b46a80af3a1e40a888533457161d716ad5527e9455ce3734faab6213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:16Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:16 crc kubenswrapper[4741]: I0226 08:15:16.097669 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:16Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:16 crc kubenswrapper[4741]: I0226 08:15:16.109442 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:16Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:16 crc kubenswrapper[4741]: I0226 08:15:16.787257 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:16 crc kubenswrapper[4741]: E0226 08:15:16.787515 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:16 crc kubenswrapper[4741]: I0226 08:15:16.787897 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:16 crc kubenswrapper[4741]: E0226 08:15:16.788005 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:16 crc kubenswrapper[4741]: I0226 08:15:16.789165 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:16 crc kubenswrapper[4741]: E0226 08:15:16.789413 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:16 crc kubenswrapper[4741]: I0226 08:15:16.789168 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:16 crc kubenswrapper[4741]: E0226 08:15:16.789810 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:16 crc kubenswrapper[4741]: I0226 08:15:16.808939 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.912385 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.912486 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.912508 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.912536 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.912557 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:17Z","lastTransitionTime":"2026-02-26T08:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:17 crc kubenswrapper[4741]: E0226 08:15:17.937494 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:17Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.944066 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.944156 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.944177 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.944206 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.944227 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:17Z","lastTransitionTime":"2026-02-26T08:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:17 crc kubenswrapper[4741]: E0226 08:15:17.966518 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:17Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.972585 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.972664 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.972691 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.972745 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:17 crc kubenswrapper[4741]: I0226 08:15:17.972770 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:17Z","lastTransitionTime":"2026-02-26T08:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:17 crc kubenswrapper[4741]: E0226 08:15:17.996649 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:17Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.014887 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.014977 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.014997 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.015038 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.015059 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:18Z","lastTransitionTime":"2026-02-26T08:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:18 crc kubenswrapper[4741]: E0226 08:15:18.040852 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:18Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.046951 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.047039 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.047059 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.047083 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.047103 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:18Z","lastTransitionTime":"2026-02-26T08:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:18 crc kubenswrapper[4741]: E0226 08:15:18.069457 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:18Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:18 crc kubenswrapper[4741]: E0226 08:15:18.069838 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.787246 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.787827 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.787890 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.787960 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:18 crc kubenswrapper[4741]: E0226 08:15:18.788350 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:18 crc kubenswrapper[4741]: E0226 08:15:18.788260 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:18 crc kubenswrapper[4741]: E0226 08:15:18.788577 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:18 crc kubenswrapper[4741]: E0226 08:15:18.788775 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:18 crc kubenswrapper[4741]: I0226 08:15:18.806225 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 26 08:15:19 crc kubenswrapper[4741]: I0226 08:15:19.787520 4741 scope.go:117] "RemoveContainer" containerID="a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7" Feb 26 08:15:20 crc kubenswrapper[4741]: I0226 08:15:20.786645 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:20 crc kubenswrapper[4741]: I0226 08:15:20.786645 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:20 crc kubenswrapper[4741]: E0226 08:15:20.787440 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:20 crc kubenswrapper[4741]: I0226 08:15:20.786699 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:20 crc kubenswrapper[4741]: I0226 08:15:20.786690 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:20 crc kubenswrapper[4741]: E0226 08:15:20.787550 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:20 crc kubenswrapper[4741]: E0226 08:15:20.787648 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:20 crc kubenswrapper[4741]: E0226 08:15:20.787900 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:20 crc kubenswrapper[4741]: I0226 08:15:20.913419 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/2.log" Feb 26 08:15:20 crc kubenswrapper[4741]: E0226 08:15:20.929973 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:15:21 crc kubenswrapper[4741]: I0226 08:15:21.925966 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/2.log" Feb 26 08:15:21 crc kubenswrapper[4741]: I0226 08:15:21.929861 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerStarted","Data":"eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494"} Feb 26 08:15:21 crc kubenswrapper[4741]: I0226 08:15:21.946825 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:21Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:21 crc kubenswrapper[4741]: I0226 08:15:21.963868 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5
318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:21Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:21 crc kubenswrapper[4741]: I0226 08:15:21.975526 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:21Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:21 crc kubenswrapper[4741]: I0226 08:15:21.999239 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:21Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.020201 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.042855 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda7a3ac-addf-4c1f-a39c-2e6d0dd57b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f37351e10c025a82957f2d8c27749be49476cdcd05f1f6619723c72099dd01dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ 
'[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 08:12:48.093165 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 08:12:48.095259 1 observer_polling.go:159] Starting file observer\\\\nI0226 08:12:48.142891 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 08:12:48.147138 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 08:13:18.073143 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 08:13:18.073352 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc93b4aac2624a69b157adf7a2bdc7a34a168ab88b2a7d6c9c5d2f81ac9f8ee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f31097b46a80af3a1e40a888533457161d716ad5527e9455ce3734faab6213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.058010 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd51b3f-a150-4aff-9234-91863047c6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf389edc547befcb611cfddd4d4d8a1b26b08bb4ebeb4b7724956e9e285800f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.076073 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.093845 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.115304 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.138734 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.160863 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:12Z\\\",\\\"message\\\":\\\"2026-02-26T08:14:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7\\\\n2026-02-26T08:14:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7 to /host/opt/cni/bin/\\\\n2026-02-26T08:14:27Z [verbose] multus-daemon started\\\\n2026-02-26T08:14:27Z [verbose] 
Readiness Indicator file check\\\\n2026-02-26T08:15:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.202082 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c955ca4-e71c-474e-9850-18e5815ea445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6651b7b066cfb6320f48bea0d01e86854f975f80ac8af40eed041ac51abecb\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2701d043d832010159547124ccd80f573b3ec40f7026fcc5d5814c02864d077a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d2d00c33ecb888bad4d8099db424712d33f7980cc90f4eb398ffb6020548a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5adc5002cb64cbb8f41271df8b506a043d82f75f3484ee483065d26242efaed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1adb81ed7dd7307050b13477ab3abd2ac58ed0196865fe4c1cabdc028f71edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}},{\\\"containerID\\\":
\\\"cri-o://a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.244832 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.269941 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.301140 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:54Z\\\",\\\"message\\\":\\\"ions:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 08:14:54.773017 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0226 08:14:54.773617 7060 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0226 08:14:54.773641 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0226 08:14:54.773638 7060 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.319026 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.338818 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.358306 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.786861 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.787020 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.787162 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.787055 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:22 crc kubenswrapper[4741]: E0226 08:15:22.787428 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:22 crc kubenswrapper[4741]: E0226 08:15:22.787790 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:22 crc kubenswrapper[4741]: E0226 08:15:22.788020 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:22 crc kubenswrapper[4741]: E0226 08:15:22.788224 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.940305 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/3.log" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.941863 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/2.log" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.947227 4741 generic.go:334] "Generic (PLEG): container finished" podID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerID="eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494" exitCode=1 Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.947299 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494"} Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.947374 4741 scope.go:117] "RemoveContainer" containerID="a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.948757 4741 scope.go:117] "RemoveContainer" containerID="eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494" Feb 26 08:15:22 crc kubenswrapper[4741]: E0226 08:15:22.949061 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 
08:15:22.976644 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c96
68929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"set
up\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:22 crc kubenswrapper[4741]: I0226 08:15:22.996282 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:22Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.018219 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.039035 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc 
kubenswrapper[4741]: I0226 08:15:23.062099 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.084737 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.103349 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.125342 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:12Z\\\",\\\"message\\\":\\\"2026-02-26T08:14:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7\\\\n2026-02-26T08:14:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7 to /host/opt/cni/bin/\\\\n2026-02-26T08:14:27Z [verbose] multus-daemon started\\\\n2026-02-26T08:14:27Z [verbose] 
Readiness Indicator file check\\\\n2026-02-26T08:15:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.162652 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c955ca4-e71c-474e-9850-18e5815ea445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6651b7b066cfb6320f48bea0d01e86854f975f80ac8af40eed041ac51abecb\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2701d043d832010159547124ccd80f573b3ec40f7026fcc5d5814c02864d077a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d2d00c33ecb888bad4d8099db424712d33f7980cc90f4eb398ffb6020548a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5adc5002cb64cbb8f41271df8b506a043d82f75f3484ee483065d26242efaed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1adb81ed7dd7307050b13477ab3abd2ac58ed0196865fe4c1cabdc028f71edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}},{\\\"containerID\\\":
\\\"cri-o://a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.181588 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda7a3ac-addf-4c1f-a39c-2e6d0dd57b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f37351e10c025a82957f2d8c27749be49476cdcd05f1f6619723c72099dd01dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 08:12:48.093165 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 08:12:48.095259 1 observer_polling.go:159] Starting file observer\\\\nI0226 08:12:48.142891 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 08:12:48.147138 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 08:13:18.073143 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 08:13:18.073352 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc93b4aac2624a69b157adf7a2bdc7a34a168ab88b2a7d6c9c5d2f81ac9f8ee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f31097b46a80af3a1e40a888533457161d716ad5527e9455ce3734faab6213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.194946 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd51b3f-a150-4aff-9234-91863047c6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf389edc547befcb611cfddd4d4d8a1b26b08bb4ebeb4b7724956e9e285800f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7934
26f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.213506 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.233815 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.246489 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.265100 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.282827 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.305100 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a560c4885de9de98380561d373582e222eaecc88d3af3cbcdbfabb3a9060bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:14:54Z\\\",\\\"message\\\":\\\"ions:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 08:14:54.773017 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0226 08:14:54.773617 7060 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0226 08:14:54.773641 7060 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0226 08:14:54.773638 7060 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:22Z\\\",\\\"message\\\":\\\"org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 
fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 08:15:21.913926 7378 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 08:15:21.914044 7378 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c5543
75f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.329826 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.343900 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:23Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:23 crc kubenswrapper[4741]: I0226 08:15:23.957383 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/3.log" Feb 26 08:15:24 crc kubenswrapper[4741]: I0226 08:15:24.786487 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:24 crc kubenswrapper[4741]: I0226 08:15:24.786487 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:24 crc kubenswrapper[4741]: I0226 08:15:24.786533 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:24 crc kubenswrapper[4741]: I0226 08:15:24.786556 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:24 crc kubenswrapper[4741]: E0226 08:15:24.787327 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:24 crc kubenswrapper[4741]: E0226 08:15:24.787469 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:24 crc kubenswrapper[4741]: E0226 08:15:24.787781 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:24 crc kubenswrapper[4741]: E0226 08:15:24.787891 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.162791 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.164365 4741 scope.go:117] "RemoveContainer" containerID="eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494" Feb 26 08:15:25 crc kubenswrapper[4741]: E0226 08:15:25.164730 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.193594 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c955ca4-e71c-474e-9850-18e5815ea445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6651b7b066cfb6320f48bea0d01e86854f975f80ac8af40eed041ac51abecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2701d043d832010159547124ccd80f573b3ec40f7026fcc5d5814c02864d077a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d2d00c33ecb888bad4d8099db424712d33f7980cc90f4eb398ffb6020548a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5adc5002cb64cbb8f41271df8b506a043d82f75f3484ee483065d26242efaed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1adb81ed7dd7307050b13477ab3abd2ac58ed0196865fe4c1cabdc028f71edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.215132 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda7a3ac-addf-4c1f-a39c-2e6d0dd57b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f37351e10c025a82957f2d8c27749be49476cdcd05f1f6619723c72099dd01dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 08:12:48.093165 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 08:12:48.095259 1 observer_polling.go:159] Starting file observer\\\\nI0226 08:12:48.142891 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 08:12:48.147138 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 08:13:18.073143 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 08:13:18.073352 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc93b4aac2624a69b157adf7a2bdc7a34a168ab88b2a7d6c9c5d2f81ac9f8ee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f31097b46a80af3a1e40a888533457161d716ad5527e9455ce3734faab6213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.233221 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd51b3f-a150-4aff-9234-91863047c6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf389edc547befcb611cfddd4d4d8a1b26b08bb4ebeb4b7724956e9e285800f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.254170 4741 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.273811 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.297658 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.317715 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.339673 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:12Z\\\",\\\"message\\\":\\\"2026-02-26T08:14:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7\\\\n2026-02-26T08:14:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7 to /host/opt/cni/bin/\\\\n2026-02-26T08:14:27Z [verbose] multus-daemon started\\\\n2026-02-26T08:14:27Z [verbose] 
Readiness Indicator file check\\\\n2026-02-26T08:15:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.360829 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3c
e5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.381029 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.402650 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.437210 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:22Z\\\",\\\"message\\\":\\\"org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 08:15:21.913926 7378 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 08:15:21.914044 7378 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:15:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e09216
8521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.467925 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.486579 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.514002 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5
318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.535350 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.555195 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.574623 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc 
kubenswrapper[4741]: I0226 08:15:25.598443 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.809541 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.833161 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.850772 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.874525 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:12Z\\\",\\\"message\\\":\\\"2026-02-26T08:14:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7\\\\n2026-02-26T08:14:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7 to /host/opt/cni/bin/\\\\n2026-02-26T08:14:27Z [verbose] multus-daemon started\\\\n2026-02-26T08:14:27Z [verbose] 
Readiness Indicator file check\\\\n2026-02-26T08:15:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.909234 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c955ca4-e71c-474e-9850-18e5815ea445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6651b7b066cfb6320f48bea0d01e86854f975f80ac8af40eed041ac51abecb\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2701d043d832010159547124ccd80f573b3ec40f7026fcc5d5814c02864d077a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d2d00c33ecb888bad4d8099db424712d33f7980cc90f4eb398ffb6020548a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5adc5002cb64cbb8f41271df8b506a043d82f75f3484ee483065d26242efaed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1adb81ed7dd7307050b13477ab3abd2ac58ed0196865fe4c1cabdc028f71edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}},{\\\"containerID\\\":
\\\"cri-o://a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.930497 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda7a3ac-addf-4c1f-a39c-2e6d0dd57b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f37351e10c025a82957f2d8c27749be49476cdcd05f1f6619723c72099dd01dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 08:12:48.093165 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 08:12:48.095259 1 observer_polling.go:159] Starting file observer\\\\nI0226 08:12:48.142891 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 08:12:48.147138 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 08:13:18.073143 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 08:13:18.073352 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc93b4aac2624a69b157adf7a2bdc7a34a168ab88b2a7d6c9c5d2f81ac9f8ee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f31097b46a80af3a1e40a888533457161d716ad5527e9455ce3734faab6213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: E0226 08:15:25.931222 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.954531 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd51b3f-a150-4aff-9234-91863047c6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf389edc547befcb611cfddd4d4d8a1b26b08bb4ebeb4b7724956e9e285800f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.975489 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:25 crc kubenswrapper[4741]: I0226 08:15:25.998718 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:25Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:26 crc kubenswrapper[4741]: I0226 08:15:26.014041 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:26 crc kubenswrapper[4741]: I0226 08:15:26.033176 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:26 crc kubenswrapper[4741]: I0226 08:15:26.052843 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:26 crc kubenswrapper[4741]: I0226 08:15:26.079821 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:22Z\\\",\\\"message\\\":\\\"org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 08:15:21.913926 7378 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 08:15:21.914044 7378 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:15:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e09216
8521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:26 crc kubenswrapper[4741]: I0226 08:15:26.104181 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:26 crc kubenswrapper[4741]: I0226 08:15:26.129100 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:26 crc kubenswrapper[4741]: I0226 08:15:26.149669 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:26 crc kubenswrapper[4741]: I0226 08:15:26.170477 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:26 crc kubenswrapper[4741]: I0226 08:15:26.185329 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:26 crc 
kubenswrapper[4741]: I0226 08:15:26.203853 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:26 crc kubenswrapper[4741]: I0226 08:15:26.786540 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:26 crc kubenswrapper[4741]: I0226 08:15:26.786554 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:26 crc kubenswrapper[4741]: I0226 08:15:26.786554 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:26 crc kubenswrapper[4741]: I0226 08:15:26.786553 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:26 crc kubenswrapper[4741]: E0226 08:15:26.787571 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:26 crc kubenswrapper[4741]: E0226 08:15:26.787713 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:26 crc kubenswrapper[4741]: E0226 08:15:26.787868 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:26 crc kubenswrapper[4741]: E0226 08:15:26.787979 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.157313 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.157374 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.157434 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.157464 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.157485 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:28Z","lastTransitionTime":"2026-02-26T08:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.179513 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.185473 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.185530 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.185538 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.185556 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.185567 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:28Z","lastTransitionTime":"2026-02-26T08:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.207416 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.212436 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.212465 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.212475 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.212490 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.212514 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:28Z","lastTransitionTime":"2026-02-26T08:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.234430 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.240708 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.240771 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.240785 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.240804 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.240817 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:28Z","lastTransitionTime":"2026-02-26T08:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.269621 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.275186 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.275230 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.275429 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.275468 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.275485 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:28Z","lastTransitionTime":"2026-02-26T08:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.289966 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.290794 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.620330 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.620554 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.620630 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:32.620577379 +0000 UTC m=+227.616514796 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.620734 4741 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.620844 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:16:32.620815446 +0000 UTC m=+227.616752853 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.621076 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.621172 4741 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.621215 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 08:16:32.621204906 +0000 UTC m=+227.617142313 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.722628 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.722678 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.722699 4741 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.722797 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 08:16:32.722769889 +0000 UTC m=+227.718707316 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.722416 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.723175 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.723320 4741 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.723391 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs podName:f2840647-3181-4a32-9386-b7f030bb9356 nodeName:}" failed. No retries permitted until 2026-02-26 08:16:32.723368476 +0000 UTC m=+227.719305903 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs") pod "network-metrics-daemon-zlfsg" (UID: "f2840647-3181-4a32-9386-b7f030bb9356") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.723614 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.723787 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.723823 4741 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.723840 4741 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.723898 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 08:16:32.72388199 +0000 UTC m=+227.719819407 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.786778 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.786806 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.786885 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:28 crc kubenswrapper[4741]: I0226 08:15:28.786891 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.787662 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.787852 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.788173 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:28 crc kubenswrapper[4741]: E0226 08:15:28.788340 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:30 crc kubenswrapper[4741]: I0226 08:15:30.786475 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:30 crc kubenswrapper[4741]: I0226 08:15:30.786632 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:30 crc kubenswrapper[4741]: I0226 08:15:30.787266 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:30 crc kubenswrapper[4741]: I0226 08:15:30.787281 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:30 crc kubenswrapper[4741]: E0226 08:15:30.787425 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:30 crc kubenswrapper[4741]: E0226 08:15:30.787614 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:30 crc kubenswrapper[4741]: E0226 08:15:30.787752 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:30 crc kubenswrapper[4741]: E0226 08:15:30.788069 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:30 crc kubenswrapper[4741]: E0226 08:15:30.933029 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:15:32 crc kubenswrapper[4741]: I0226 08:15:32.787177 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:32 crc kubenswrapper[4741]: E0226 08:15:32.787726 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:32 crc kubenswrapper[4741]: I0226 08:15:32.787354 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:32 crc kubenswrapper[4741]: E0226 08:15:32.787837 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:32 crc kubenswrapper[4741]: I0226 08:15:32.787311 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:32 crc kubenswrapper[4741]: I0226 08:15:32.787381 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:32 crc kubenswrapper[4741]: E0226 08:15:32.787925 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:32 crc kubenswrapper[4741]: E0226 08:15:32.788493 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:34 crc kubenswrapper[4741]: I0226 08:15:34.787056 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:34 crc kubenswrapper[4741]: I0226 08:15:34.787139 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:34 crc kubenswrapper[4741]: I0226 08:15:34.787098 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:34 crc kubenswrapper[4741]: I0226 08:15:34.787229 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:34 crc kubenswrapper[4741]: E0226 08:15:34.787323 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:34 crc kubenswrapper[4741]: E0226 08:15:34.787496 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:34 crc kubenswrapper[4741]: E0226 08:15:34.787670 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:34 crc kubenswrapper[4741]: E0226 08:15:34.787812 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:35 crc kubenswrapper[4741]: I0226 08:15:35.805803 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:35 crc kubenswrapper[4741]: I0226 08:15:35.823789 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:35 crc 
kubenswrapper[4741]: I0226 08:15:35.843060 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:35 crc kubenswrapper[4741]: I0226 08:15:35.859226 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:35 crc kubenswrapper[4741]: I0226 08:15:35.881886 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:35 crc kubenswrapper[4741]: I0226 08:15:35.899508 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:35 crc kubenswrapper[4741]: I0226 08:15:35.922075 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:12Z\\\",\\\"message\\\":\\\"2026-02-26T08:14:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7\\\\n2026-02-26T08:14:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7 to /host/opt/cni/bin/\\\\n2026-02-26T08:14:27Z [verbose] multus-daemon started\\\\n2026-02-26T08:14:27Z [verbose] 
Readiness Indicator file check\\\\n2026-02-26T08:15:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:35 crc kubenswrapper[4741]: E0226 08:15:35.934103 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 08:15:35 crc kubenswrapper[4741]: I0226 08:15:35.957706 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c955ca4-e71c-474e-9850-18e5815ea445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6651b7b066cfb6320f48bea0d01e86854f975f80ac8af40eed041ac51abecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\
\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2701d043d832010159547124ccd80f573b3ec40f7026fcc5d5814c02864d077a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d2d00c33ecb888bad4d8099db424712d33f7980cc90f4eb398ffb6020548a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5adc5002cb64cbb8f41271df8b506a043d82f75f3484ee483065d26242efaed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee
1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1adb81ed7dd7307050b13477ab3abd2ac58ed0196865fe4c1cabdc028f71edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:49Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:35 crc kubenswrapper[4741]: I0226 08:15:35.979858 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda7a3ac-addf-4c1f-a39c-2e6d0dd57b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f37351e10c025a82957f2d8c27749be49476cdcd05f1f6619723c72099dd01dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 08:12:48.093165 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 08:12:48.095259 1 observer_polling.go:159] Starting file observer\\\\nI0226 08:12:48.142891 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 08:12:48.147138 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 08:13:18.073143 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 08:13:18.073352 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc93b4aac2624a69b157adf7a2bdc7a34a168ab88b2a7d6c9c5d2f81ac9f8ee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f31097b46a80af3a1e40a888533457161d716ad5527e9455ce3734faab6213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:35 crc kubenswrapper[4741]: I0226 08:15:35.995920 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd51b3f-a150-4aff-9234-91863047c6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf389edc547befcb611cfddd4d4d8a1b26b08bb4ebeb4b7724956e9e285800f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7934
26f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:36 crc kubenswrapper[4741]: I0226 08:15:36.019091 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:36 crc kubenswrapper[4741]: I0226 08:15:36.038007 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:36 crc kubenswrapper[4741]: I0226 08:15:36.054653 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:36 crc kubenswrapper[4741]: I0226 08:15:36.068740 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:36 crc kubenswrapper[4741]: I0226 08:15:36.084463 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:36 crc kubenswrapper[4741]: I0226 08:15:36.108450 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:22Z\\\",\\\"message\\\":\\\"org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 08:15:21.913926 7378 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 08:15:21.914044 7378 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:15:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e09216
8521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:36 crc kubenswrapper[4741]: I0226 08:15:36.129757 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:36 crc kubenswrapper[4741]: I0226 08:15:36.179436 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:36 crc kubenswrapper[4741]: I0226 08:15:36.195096 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e93
7e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:36 crc kubenswrapper[4741]: I0226 08:15:36.786535 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:36 crc kubenswrapper[4741]: I0226 08:15:36.786612 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:36 crc kubenswrapper[4741]: I0226 08:15:36.786751 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:36 crc kubenswrapper[4741]: I0226 08:15:36.786896 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:36 crc kubenswrapper[4741]: E0226 08:15:36.786874 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:36 crc kubenswrapper[4741]: E0226 08:15:36.787085 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:36 crc kubenswrapper[4741]: E0226 08:15:36.787327 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:36 crc kubenswrapper[4741]: E0226 08:15:36.787465 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.469911 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.469983 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.470067 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.470100 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.470196 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:38Z","lastTransitionTime":"2026-02-26T08:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:38 crc kubenswrapper[4741]: E0226 08:15:38.491626 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.497747 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.497812 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.497839 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.497873 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.497896 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:38Z","lastTransitionTime":"2026-02-26T08:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:38 crc kubenswrapper[4741]: E0226 08:15:38.517265 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.523286 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.523319 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.523331 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.523347 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.523363 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:38Z","lastTransitionTime":"2026-02-26T08:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:38 crc kubenswrapper[4741]: E0226 08:15:38.542262 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.546922 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.546957 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.546967 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.546982 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.546992 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:38Z","lastTransitionTime":"2026-02-26T08:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:38 crc kubenswrapper[4741]: E0226 08:15:38.561295 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.566419 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.566480 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.566503 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.566532 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.566554 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:38Z","lastTransitionTime":"2026-02-26T08:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:38 crc kubenswrapper[4741]: E0226 08:15:38.583449 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:38 crc kubenswrapper[4741]: E0226 08:15:38.583678 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.786690 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.786785 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:38 crc kubenswrapper[4741]: E0226 08:15:38.786880 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:38 crc kubenswrapper[4741]: E0226 08:15:38.786985 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.787048 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:38 crc kubenswrapper[4741]: I0226 08:15:38.787067 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:38 crc kubenswrapper[4741]: E0226 08:15:38.787333 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:38 crc kubenswrapper[4741]: E0226 08:15:38.787480 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:39 crc kubenswrapper[4741]: I0226 08:15:39.787397 4741 scope.go:117] "RemoveContainer" containerID="eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494" Feb 26 08:15:39 crc kubenswrapper[4741]: E0226 08:15:39.787605 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" Feb 26 08:15:40 crc kubenswrapper[4741]: I0226 08:15:40.786938 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:40 crc kubenswrapper[4741]: I0226 08:15:40.787007 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:40 crc kubenswrapper[4741]: I0226 08:15:40.787021 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:40 crc kubenswrapper[4741]: I0226 08:15:40.787136 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:40 crc kubenswrapper[4741]: E0226 08:15:40.787135 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:40 crc kubenswrapper[4741]: E0226 08:15:40.787246 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:40 crc kubenswrapper[4741]: E0226 08:15:40.787459 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:40 crc kubenswrapper[4741]: E0226 08:15:40.787614 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:40 crc kubenswrapper[4741]: E0226 08:15:40.935971 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:15:42 crc kubenswrapper[4741]: I0226 08:15:42.786889 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:42 crc kubenswrapper[4741]: I0226 08:15:42.787039 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:42 crc kubenswrapper[4741]: E0226 08:15:42.787198 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:42 crc kubenswrapper[4741]: I0226 08:15:42.787295 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:42 crc kubenswrapper[4741]: E0226 08:15:42.787429 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:42 crc kubenswrapper[4741]: E0226 08:15:42.787524 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:42 crc kubenswrapper[4741]: I0226 08:15:42.787680 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:42 crc kubenswrapper[4741]: E0226 08:15:42.787746 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:44 crc kubenswrapper[4741]: I0226 08:15:44.786532 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:44 crc kubenswrapper[4741]: E0226 08:15:44.786699 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:44 crc kubenswrapper[4741]: I0226 08:15:44.786927 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:44 crc kubenswrapper[4741]: E0226 08:15:44.787026 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:44 crc kubenswrapper[4741]: I0226 08:15:44.787239 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:44 crc kubenswrapper[4741]: E0226 08:15:44.787318 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:44 crc kubenswrapper[4741]: I0226 08:15:44.787348 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:44 crc kubenswrapper[4741]: E0226 08:15:44.787489 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:45 crc kubenswrapper[4741]: I0226 08:15:45.810543 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1009a1b7-c5dc-447c-a731-f613d5ea6eaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a266639d5009e85f8ff32757c6941051a5e0e98c94356678be3f408a7650d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\"
:\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3dd22f1554f80b5a32f28d51a2e35e058bd993f0e767673415f4cc74d0307ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6a99d73d12adaeb6ed0218e1e8417b813351817d5edd75752c5d92f5a4184c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2
597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9febd17c2b4720f2c4d9a192b36f0f91483beb1ade7753aae3a13174040a8e84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:45 crc kubenswrapper[4741]: I0226 08:15:45.835440 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:45 crc kubenswrapper[4741]: I0226 08:15:45.868617 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:22Z\\\",\\\"message\\\":\\\"org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 08:15:21.913926 7378 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 08:15:21.914044 7378 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:15:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f635de1cc89e09216
8521d7f657832de760f80ef64df3fc8284dd04c554375f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzbcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2w5nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:45 crc kubenswrapper[4741]: I0226 08:15:45.887481 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d7d48-c4b9-4725-bd12-32a95c02133f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fb91658cda654219746197660b18e120c92f459063ff0e6d0431f4c6cffd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97344b051f39630397566cae6195b58ec720adf537d671d7d20241519d947cb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7db5784b5f747bd6b726995bd60651ea10cc4fdb941a5af78efdeeaa30786c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:26Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e36e57d181a1260644e1b3f7036291af546b79d8e9992de21653ff68ee4799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b3e
e6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33b3ee6ee6eb4fe387c09c07e8535e7da7ef28047ce5e5f69aa7c11b6d24d17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b87cea1e955004ac5a893e6fe777a59daea6531667a45106df0b89d90b382f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7a6e0ce76916c4aa50a9a84ed1edcc900a0bcaa1f2545e4a024bc542f29b355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:14:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hj9pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f5qkr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:45 crc kubenswrapper[4741]: I0226 08:15:45.899323 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-869lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81747676-7ef1-403b-8315-96e475f06342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a0b97e14e150119bb4be6dc372df35164616481ea59c280a9442933384138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvrxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-869lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:45 crc kubenswrapper[4741]: I0226 08:15:45.923415 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ea009dd-67d3-42fb-bd36-bad6d9eadd31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:47Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 08:13:46.599453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 08:13:46.599631 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 08:13:46.600764 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2610456232/tls.crt::/tmp/serving-cert-2610456232/tls.key\\\\\\\"\\\\nI0226 08:13:47.008190 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 08:13:47.010319 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 08:13:47.010346 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 08:13:47.010374 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 08:13:47.010381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 08:13:47.014758 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 08:13:47.014820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 08:13:47.014842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 08:13:47.014849 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 08:13:47.014856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 08:13:47.014868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 08:13:47.014772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0226 08:13:47.018047 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:13:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96da1ec283a9320164e1986768df27af5
318357d97547f882ef007b665290008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:45 crc kubenswrapper[4741]: E0226 08:15:45.938965 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 08:15:45 crc kubenswrapper[4741]: I0226 08:15:45.944724 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124d6a5b-7ffa-4684-b0b4-2c6b75d42b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dfa2e8d48a306017193f749dd4a056a0326e0ab8a3fb927f4aa57eb01f57811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb28f59de71b186f6a766d8a667d451656e937e4e0319e00132bc198390b1516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nv9p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65fqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:45 crc kubenswrapper[4741]: I0226 08:15:45.964207 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e3069fc14fa7348774f69c5bdf35c2e389776f8418865f60d711aec8dce2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T08:15:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:45 crc kubenswrapper[4741]: I0226 08:15:45.979163 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2840647-3181-4a32-9386-b7f030bb9356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zlfsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:45 crc 
kubenswrapper[4741]: I0226 08:15:45.998782 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:45Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:46 crc kubenswrapper[4741]: I0226 08:15:46.018510 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36eb48365628d548f37b801aa661573e7a00f2c16ecc87dd9ed68ae65d731bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:46 crc kubenswrapper[4741]: I0226 08:15:46.036426 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c7b5b01-4061-4003-b002-a977260886c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b3f0ce3a846c0b7eba3f0ef03051890438fcdb8d06bbc627571deb3bcf61e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz5lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zqf2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:46 crc kubenswrapper[4741]: I0226 08:15:46.053503 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mzt8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd732e7-0e36-485f-b750-856d6869e697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T08:15:12Z\\\",\\\"message\\\":\\\"2026-02-26T08:14:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7\\\\n2026-02-26T08:14:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c652ee12-cc63-4558-8c1e-77d1f04f28c7 to /host/opt/cni/bin/\\\\n2026-02-26T08:14:27Z [verbose] multus-daemon started\\\\n2026-02-26T08:14:27Z [verbose] 
Readiness Indicator file check\\\\n2026-02-26T08:15:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlbm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mzt8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:46 crc kubenswrapper[4741]: I0226 08:15:46.088205 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c955ca4-e71c-474e-9850-18e5815ea445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6651b7b066cfb6320f48bea0d01e86854f975f80ac8af40eed041ac51abecb\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2701d043d832010159547124ccd80f573b3ec40f7026fcc5d5814c02864d077a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d2d00c33ecb888bad4d8099db424712d33f7980cc90f4eb398ffb6020548a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5adc5002cb64cbb8f41271df8b506a043d82f75f3484ee483065d26242efaed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1adb81ed7dd7307050b13477ab3abd2ac58ed0196865fe4c1cabdc028f71edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b79c24df73e787302789a4fd825f882849f3920f01de037560c20716f1afb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d0c60da1de46246ddb7ade04c69f5bbe6493001fef071f3c4c445094f1518da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}}},{\\\"containerID\\\":
\\\"cri-o://a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9d358a2e5be67fbfd58135af87896ede30f79a5e879729ae66b0585d68b97a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:46 crc kubenswrapper[4741]: I0226 08:15:46.111106 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda7a3ac-addf-4c1f-a39c-2e6d0dd57b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:13:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f37351e10c025a82957f2d8c27749be49476cdcd05f1f6619723c72099dd01dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0aa222844dd164999ed2f0b14b64239cb6004c7f56bef4ca86b6adcd3ac71da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T08:13:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 08:12:48.093165 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 08:12:48.095259 1 observer_polling.go:159] Starting file observer\\\\nI0226 08:12:48.142891 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 08:12:48.147138 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 08:13:18.073143 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 08:13:18.073352 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc93b4aac2624a69b157adf7a2bdc7a34a168ab88b2a7d6c9c5d2f81ac9f8ee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f31097b46a80af3a1e40a888533457161d716ad5527e9455ce3734faab6213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:46 crc kubenswrapper[4741]: I0226 08:15:46.130038 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd51b3f-a150-4aff-9234-91863047c6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf389edc547befcb611cfddd4d4d8a1b26b08bb4ebeb4b7724956e9e285800f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7934
26f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://037dc79b25cbfce8665a756c011a6a46bea8cf4938ab77933fb2026330274b9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T08:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T08:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:12:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:46 crc kubenswrapper[4741]: I0226 08:15:46.152309 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:46 crc kubenswrapper[4741]: I0226 08:15:46.174262 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5683d6180882e3f141418cff51ce7b718bac7fedbd04d0fef5ac666acc99737c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5a173032e06a5b932066c34d6441ccd0f5df7af74a39963a5fd1a449f1bd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:46 crc kubenswrapper[4741]: I0226 08:15:46.194315 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjwp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52383bb-5c8d-4eef-9df8-93143a9326d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T08:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4e7e657781db80b005f3ce5f246ac90763b9c8023c4cc3289d4cda843c26959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T08:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpwdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T08:14:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjwp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:46Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:46 crc kubenswrapper[4741]: I0226 08:15:46.786780 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:46 crc kubenswrapper[4741]: I0226 08:15:46.786851 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:46 crc kubenswrapper[4741]: E0226 08:15:46.787387 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:46 crc kubenswrapper[4741]: I0226 08:15:46.787020 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:46 crc kubenswrapper[4741]: I0226 08:15:46.786864 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:46 crc kubenswrapper[4741]: E0226 08:15:46.787593 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:46 crc kubenswrapper[4741]: E0226 08:15:46.787834 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:46 crc kubenswrapper[4741]: E0226 08:15:46.788080 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.712762 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.712815 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.712828 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.712852 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.712871 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:48Z","lastTransitionTime":"2026-02-26T08:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:48 crc kubenswrapper[4741]: E0226 08:15:48.730622 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:48Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.735377 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.735433 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.735451 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.735473 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.735490 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:48Z","lastTransitionTime":"2026-02-26T08:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.760308 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.760375 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.760389 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.760410 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.760422 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:48Z","lastTransitionTime":"2026-02-26T08:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:48 crc kubenswrapper[4741]: E0226 08:15:48.776219 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:48Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.780982 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.781034 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.781053 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.781079 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.781097 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:48Z","lastTransitionTime":"2026-02-26T08:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.786068 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.786126 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.786139 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:48 crc kubenswrapper[4741]: E0226 08:15:48.786227 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.786074 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:48 crc kubenswrapper[4741]: E0226 08:15:48.786310 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:48 crc kubenswrapper[4741]: E0226 08:15:48.786430 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:48 crc kubenswrapper[4741]: E0226 08:15:48.786493 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:48 crc kubenswrapper[4741]: E0226 08:15:48.798300 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:48Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.802629 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.802681 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.802700 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.802724 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:48 crc kubenswrapper[4741]: I0226 08:15:48.802740 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:48Z","lastTransitionTime":"2026-02-26T08:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 08:15:48 crc kubenswrapper[4741]: E0226 08:15:48.820618 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T08:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b187775-8409-4c81-b985-3b98d85603dc\\\",\\\"systemUUID\\\":\\\"76b81fca-617e-45c9-86c2-b22f80bbe1d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T08:15:48Z is after 2025-08-24T17:21:41Z" Feb 26 08:15:48 crc kubenswrapper[4741]: E0226 08:15:48.820846 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 08:15:50 crc kubenswrapper[4741]: I0226 08:15:50.786472 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:50 crc kubenswrapper[4741]: I0226 08:15:50.786527 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:50 crc kubenswrapper[4741]: I0226 08:15:50.786472 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:50 crc kubenswrapper[4741]: E0226 08:15:50.786888 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:50 crc kubenswrapper[4741]: I0226 08:15:50.786940 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:50 crc kubenswrapper[4741]: E0226 08:15:50.787179 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:50 crc kubenswrapper[4741]: E0226 08:15:50.787337 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:50 crc kubenswrapper[4741]: E0226 08:15:50.787883 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:50 crc kubenswrapper[4741]: E0226 08:15:50.941319 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:15:52 crc kubenswrapper[4741]: I0226 08:15:52.787201 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:52 crc kubenswrapper[4741]: I0226 08:15:52.787326 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:52 crc kubenswrapper[4741]: I0226 08:15:52.787216 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:52 crc kubenswrapper[4741]: I0226 08:15:52.787464 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:52 crc kubenswrapper[4741]: E0226 08:15:52.787612 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:52 crc kubenswrapper[4741]: E0226 08:15:52.787459 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:52 crc kubenswrapper[4741]: E0226 08:15:52.787768 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:52 crc kubenswrapper[4741]: E0226 08:15:52.787953 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:53 crc kubenswrapper[4741]: I0226 08:15:53.788655 4741 scope.go:117] "RemoveContainer" containerID="eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494" Feb 26 08:15:53 crc kubenswrapper[4741]: E0226 08:15:53.788990 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2w5nl_openshift-ovn-kubernetes(1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" Feb 26 08:15:54 crc kubenswrapper[4741]: I0226 08:15:54.786419 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:54 crc kubenswrapper[4741]: I0226 08:15:54.786499 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:54 crc kubenswrapper[4741]: I0226 08:15:54.786447 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:54 crc kubenswrapper[4741]: E0226 08:15:54.786632 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:54 crc kubenswrapper[4741]: I0226 08:15:54.786558 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:54 crc kubenswrapper[4741]: E0226 08:15:54.786820 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:54 crc kubenswrapper[4741]: E0226 08:15:54.787063 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:54 crc kubenswrapper[4741]: E0226 08:15:54.787311 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:55 crc kubenswrapper[4741]: I0226 08:15:55.820737 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=68.820714065 podStartE2EDuration="1m8.820714065s" podCreationTimestamp="2026-02-26 08:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:15:55.81981598 +0000 UTC m=+190.815753387" watchObservedRunningTime="2026-02-26 08:15:55.820714065 +0000 UTC m=+190.816651462" Feb 26 08:15:55 crc kubenswrapper[4741]: I0226 08:15:55.900197 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-f5qkr" podStartSLOduration=136.900164106 podStartE2EDuration="2m16.900164106s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:15:55.900055243 +0000 UTC m=+190.895992640" watchObservedRunningTime="2026-02-26 08:15:55.900164106 +0000 UTC m=+190.896101533" Feb 26 08:15:55 crc kubenswrapper[4741]: I0226 08:15:55.917160 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-869lw" podStartSLOduration=136.917131753 
podStartE2EDuration="2m16.917131753s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:15:55.917069691 +0000 UTC m=+190.913007088" watchObservedRunningTime="2026-02-26 08:15:55.917131753 +0000 UTC m=+190.913069140" Feb 26 08:15:55 crc kubenswrapper[4741]: I0226 08:15:55.938343 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=86.938316008 podStartE2EDuration="1m26.938316008s" podCreationTimestamp="2026-02-26 08:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:15:55.937898126 +0000 UTC m=+190.933835553" watchObservedRunningTime="2026-02-26 08:15:55.938316008 +0000 UTC m=+190.934253405" Feb 26 08:15:55 crc kubenswrapper[4741]: E0226 08:15:55.941958 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 08:15:55 crc kubenswrapper[4741]: I0226 08:15:55.954402 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65fqt" podStartSLOduration=136.954377589 podStartE2EDuration="2m16.954377589s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:15:55.954350068 +0000 UTC m=+190.950287465" watchObservedRunningTime="2026-02-26 08:15:55.954377589 +0000 UTC m=+190.950314986" Feb 26 08:15:56 crc kubenswrapper[4741]: I0226 08:15:56.028494 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podStartSLOduration=137.0284703 podStartE2EDuration="2m17.0284703s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:15:56.012157832 +0000 UTC m=+191.008095229" watchObservedRunningTime="2026-02-26 08:15:56.0284703 +0000 UTC m=+191.024407687" Feb 26 08:15:56 crc kubenswrapper[4741]: I0226 08:15:56.029065 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-mzt8d" podStartSLOduration=137.029059907 podStartE2EDuration="2m17.029059907s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:15:56.027223835 +0000 UTC m=+191.023161232" watchObservedRunningTime="2026-02-26 08:15:56.029059907 +0000 UTC m=+191.024997294" Feb 26 08:15:56 crc kubenswrapper[4741]: I0226 08:15:56.055506 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=40.055482529 podStartE2EDuration="40.055482529s" 
podCreationTimestamp="2026-02-26 08:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:15:56.054682217 +0000 UTC m=+191.050619624" watchObservedRunningTime="2026-02-26 08:15:56.055482529 +0000 UTC m=+191.051419916" Feb 26 08:15:56 crc kubenswrapper[4741]: I0226 08:15:56.068422 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=53.068399632 podStartE2EDuration="53.068399632s" podCreationTimestamp="2026-02-26 08:15:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:15:56.067813255 +0000 UTC m=+191.063750652" watchObservedRunningTime="2026-02-26 08:15:56.068399632 +0000 UTC m=+191.064337009" Feb 26 08:15:56 crc kubenswrapper[4741]: I0226 08:15:56.077963 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=38.07794589 podStartE2EDuration="38.07794589s" podCreationTimestamp="2026-02-26 08:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:15:56.077156478 +0000 UTC m=+191.073093865" watchObservedRunningTime="2026-02-26 08:15:56.07794589 +0000 UTC m=+191.073883277" Feb 26 08:15:56 crc kubenswrapper[4741]: I0226 08:15:56.128878 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bjwp7" podStartSLOduration=137.12885639 podStartE2EDuration="2m17.12885639s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:15:56.127916604 +0000 UTC m=+191.123854031" 
watchObservedRunningTime="2026-02-26 08:15:56.12885639 +0000 UTC m=+191.124793797" Feb 26 08:15:56 crc kubenswrapper[4741]: I0226 08:15:56.787082 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:56 crc kubenswrapper[4741]: E0226 08:15:56.787618 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:56 crc kubenswrapper[4741]: I0226 08:15:56.787182 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:56 crc kubenswrapper[4741]: E0226 08:15:56.787847 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:56 crc kubenswrapper[4741]: I0226 08:15:56.787241 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:56 crc kubenswrapper[4741]: E0226 08:15:56.788059 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:56 crc kubenswrapper[4741]: I0226 08:15:56.787092 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:56 crc kubenswrapper[4741]: E0226 08:15:56.788500 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:58 crc kubenswrapper[4741]: I0226 08:15:58.786634 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:15:58 crc kubenswrapper[4741]: I0226 08:15:58.786700 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:15:58 crc kubenswrapper[4741]: I0226 08:15:58.786762 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:15:58 crc kubenswrapper[4741]: I0226 08:15:58.786700 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:15:58 crc kubenswrapper[4741]: E0226 08:15:58.786829 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:15:58 crc kubenswrapper[4741]: E0226 08:15:58.787051 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:15:58 crc kubenswrapper[4741]: E0226 08:15:58.787219 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:15:58 crc kubenswrapper[4741]: E0226 08:15:58.787288 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:15:58 crc kubenswrapper[4741]: I0226 08:15:58.956075 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 08:15:58 crc kubenswrapper[4741]: I0226 08:15:58.956139 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 08:15:58 crc kubenswrapper[4741]: I0226 08:15:58.956158 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 08:15:58 crc kubenswrapper[4741]: I0226 08:15:58.956185 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 08:15:58 crc kubenswrapper[4741]: I0226 08:15:58.956199 4741 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T08:15:58Z","lastTransitionTime":"2026-02-26T08:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.018252 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t"] Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.018850 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.021374 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.021463 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.022655 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.022789 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.067914 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/faa2899a-6ddb-4f5e-9a88-c8b270988b65-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.067979 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/faa2899a-6ddb-4f5e-9a88-c8b270988b65-service-ca\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.068224 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/faa2899a-6ddb-4f5e-9a88-c8b270988b65-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.068396 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faa2899a-6ddb-4f5e-9a88-c8b270988b65-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.068460 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faa2899a-6ddb-4f5e-9a88-c8b270988b65-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.168850 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/faa2899a-6ddb-4f5e-9a88-c8b270988b65-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.168916 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faa2899a-6ddb-4f5e-9a88-c8b270988b65-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.168937 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faa2899a-6ddb-4f5e-9a88-c8b270988b65-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.168963 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/faa2899a-6ddb-4f5e-9a88-c8b270988b65-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.168984 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/faa2899a-6ddb-4f5e-9a88-c8b270988b65-service-ca\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.169017 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/faa2899a-6ddb-4f5e-9a88-c8b270988b65-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.169439 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/faa2899a-6ddb-4f5e-9a88-c8b270988b65-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.170092 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/faa2899a-6ddb-4f5e-9a88-c8b270988b65-service-ca\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.181084 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faa2899a-6ddb-4f5e-9a88-c8b270988b65-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.185944 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faa2899a-6ddb-4f5e-9a88-c8b270988b65-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-p497t\" (UID: \"faa2899a-6ddb-4f5e-9a88-c8b270988b65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.345692 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" Feb 26 08:15:59 crc kubenswrapper[4741]: W0226 08:15:59.375595 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaa2899a_6ddb_4f5e_9a88_c8b270988b65.slice/crio-43795cbf04aeec28b359533220b72753d034250f98d1872ae0f03731b5c40f10 WatchSource:0}: Error finding container 43795cbf04aeec28b359533220b72753d034250f98d1872ae0f03731b5c40f10: Status 404 returned error can't find the container with id 43795cbf04aeec28b359533220b72753d034250f98d1872ae0f03731b5c40f10 Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.877536 4741 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 26 08:15:59 crc kubenswrapper[4741]: I0226 08:15:59.888428 4741 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 08:16:00 crc kubenswrapper[4741]: I0226 08:16:00.111692 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mzt8d_3fd732e7-0e36-485f-b750-856d6869e697/kube-multus/1.log" Feb 26 08:16:00 crc kubenswrapper[4741]: I0226 08:16:00.112269 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mzt8d_3fd732e7-0e36-485f-b750-856d6869e697/kube-multus/0.log" Feb 26 08:16:00 crc kubenswrapper[4741]: I0226 08:16:00.112327 4741 generic.go:334] "Generic (PLEG): container finished" podID="3fd732e7-0e36-485f-b750-856d6869e697" containerID="8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10" exitCode=1 Feb 26 08:16:00 crc kubenswrapper[4741]: I0226 08:16:00.112380 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mzt8d" event={"ID":"3fd732e7-0e36-485f-b750-856d6869e697","Type":"ContainerDied","Data":"8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10"} Feb 26 08:16:00 crc 
kubenswrapper[4741]: I0226 08:16:00.112664 4741 scope.go:117] "RemoveContainer" containerID="991ea9f9876e5dd478f273cefe292defa87446888f90979da56628899ded42cc" Feb 26 08:16:00 crc kubenswrapper[4741]: I0226 08:16:00.112982 4741 scope.go:117] "RemoveContainer" containerID="8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10" Feb 26 08:16:00 crc kubenswrapper[4741]: E0226 08:16:00.113288 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-mzt8d_openshift-multus(3fd732e7-0e36-485f-b750-856d6869e697)\"" pod="openshift-multus/multus-mzt8d" podUID="3fd732e7-0e36-485f-b750-856d6869e697" Feb 26 08:16:00 crc kubenswrapper[4741]: I0226 08:16:00.115342 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" event={"ID":"faa2899a-6ddb-4f5e-9a88-c8b270988b65","Type":"ContainerStarted","Data":"f2bb48a6c5be17543f6e0e321e11e75e6c17b9d27f0701477d999dfef44141f1"} Feb 26 08:16:00 crc kubenswrapper[4741]: I0226 08:16:00.115430 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" event={"ID":"faa2899a-6ddb-4f5e-9a88-c8b270988b65","Type":"ContainerStarted","Data":"43795cbf04aeec28b359533220b72753d034250f98d1872ae0f03731b5c40f10"} Feb 26 08:16:00 crc kubenswrapper[4741]: I0226 08:16:00.157304 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p497t" podStartSLOduration=141.15728084 podStartE2EDuration="2m21.15728084s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:00.156865798 +0000 UTC m=+195.152803175" watchObservedRunningTime="2026-02-26 
08:16:00.15728084 +0000 UTC m=+195.153218227" Feb 26 08:16:00 crc kubenswrapper[4741]: I0226 08:16:00.786373 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:16:00 crc kubenswrapper[4741]: I0226 08:16:00.786508 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:16:00 crc kubenswrapper[4741]: I0226 08:16:00.786516 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:16:00 crc kubenswrapper[4741]: I0226 08:16:00.786700 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:00 crc kubenswrapper[4741]: E0226 08:16:00.786693 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:16:00 crc kubenswrapper[4741]: E0226 08:16:00.786926 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:16:00 crc kubenswrapper[4741]: E0226 08:16:00.787025 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:16:00 crc kubenswrapper[4741]: E0226 08:16:00.787193 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:16:00 crc kubenswrapper[4741]: E0226 08:16:00.943787 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:16:01 crc kubenswrapper[4741]: I0226 08:16:01.121249 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mzt8d_3fd732e7-0e36-485f-b750-856d6869e697/kube-multus/1.log" Feb 26 08:16:02 crc kubenswrapper[4741]: I0226 08:16:02.786489 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:16:02 crc kubenswrapper[4741]: I0226 08:16:02.786551 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:02 crc kubenswrapper[4741]: E0226 08:16:02.787059 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:16:02 crc kubenswrapper[4741]: I0226 08:16:02.786564 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:16:02 crc kubenswrapper[4741]: E0226 08:16:02.787258 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:16:02 crc kubenswrapper[4741]: I0226 08:16:02.786587 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:16:02 crc kubenswrapper[4741]: E0226 08:16:02.787485 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:16:02 crc kubenswrapper[4741]: E0226 08:16:02.787620 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:16:04 crc kubenswrapper[4741]: I0226 08:16:04.786834 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:04 crc kubenswrapper[4741]: I0226 08:16:04.786896 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:16:04 crc kubenswrapper[4741]: I0226 08:16:04.786858 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:16:04 crc kubenswrapper[4741]: E0226 08:16:04.787005 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:16:04 crc kubenswrapper[4741]: I0226 08:16:04.787070 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:16:04 crc kubenswrapper[4741]: E0226 08:16:04.787093 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:16:04 crc kubenswrapper[4741]: E0226 08:16:04.787301 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:16:04 crc kubenswrapper[4741]: E0226 08:16:04.787508 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:16:05 crc kubenswrapper[4741]: E0226 08:16:05.944749 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:16:06 crc kubenswrapper[4741]: I0226 08:16:06.786173 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:16:06 crc kubenswrapper[4741]: I0226 08:16:06.786269 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:06 crc kubenswrapper[4741]: I0226 08:16:06.786355 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:16:06 crc kubenswrapper[4741]: E0226 08:16:06.786378 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:16:06 crc kubenswrapper[4741]: E0226 08:16:06.786662 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:16:06 crc kubenswrapper[4741]: E0226 08:16:06.786824 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:16:06 crc kubenswrapper[4741]: I0226 08:16:06.786945 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:16:06 crc kubenswrapper[4741]: E0226 08:16:06.787065 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:16:08 crc kubenswrapper[4741]: I0226 08:16:08.786723 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:16:08 crc kubenswrapper[4741]: I0226 08:16:08.786798 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:08 crc kubenswrapper[4741]: I0226 08:16:08.786835 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:16:08 crc kubenswrapper[4741]: I0226 08:16:08.786838 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:16:08 crc kubenswrapper[4741]: E0226 08:16:08.786975 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:16:08 crc kubenswrapper[4741]: E0226 08:16:08.787285 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:16:08 crc kubenswrapper[4741]: E0226 08:16:08.787462 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:16:08 crc kubenswrapper[4741]: E0226 08:16:08.787594 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:16:08 crc kubenswrapper[4741]: I0226 08:16:08.788941 4741 scope.go:117] "RemoveContainer" containerID="eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494" Feb 26 08:16:09 crc kubenswrapper[4741]: I0226 08:16:09.158803 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/3.log" Feb 26 08:16:09 crc kubenswrapper[4741]: I0226 08:16:09.164489 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerStarted","Data":"3ca2b6a31da3bc8a7cce80daf6a680137b42b6140c6bd5152a84d658a4126507"} Feb 26 08:16:09 crc kubenswrapper[4741]: I0226 08:16:09.165412 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:16:09 crc kubenswrapper[4741]: I0226 08:16:09.204770 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podStartSLOduration=150.204739854 podStartE2EDuration="2m30.204739854s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:09.204308092 +0000 UTC m=+204.200245519" watchObservedRunningTime="2026-02-26 08:16:09.204739854 +0000 UTC m=+204.200677281" Feb 26 08:16:10 crc kubenswrapper[4741]: I0226 08:16:10.420208 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zlfsg"] Feb 26 08:16:10 crc kubenswrapper[4741]: I0226 08:16:10.420479 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:16:10 crc kubenswrapper[4741]: E0226 08:16:10.420676 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:16:10 crc kubenswrapper[4741]: I0226 08:16:10.787154 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:16:10 crc kubenswrapper[4741]: E0226 08:16:10.787610 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:16:10 crc kubenswrapper[4741]: I0226 08:16:10.787376 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:16:10 crc kubenswrapper[4741]: E0226 08:16:10.787700 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:16:10 crc kubenswrapper[4741]: I0226 08:16:10.787159 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:10 crc kubenswrapper[4741]: E0226 08:16:10.787813 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:16:10 crc kubenswrapper[4741]: E0226 08:16:10.947090 4741 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:16:11 crc kubenswrapper[4741]: I0226 08:16:11.787205 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:16:11 crc kubenswrapper[4741]: E0226 08:16:11.787472 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:16:11 crc kubenswrapper[4741]: I0226 08:16:11.787993 4741 scope.go:117] "RemoveContainer" containerID="8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10" Feb 26 08:16:12 crc kubenswrapper[4741]: I0226 08:16:12.179811 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mzt8d_3fd732e7-0e36-485f-b750-856d6869e697/kube-multus/1.log" Feb 26 08:16:12 crc kubenswrapper[4741]: I0226 08:16:12.179985 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mzt8d" event={"ID":"3fd732e7-0e36-485f-b750-856d6869e697","Type":"ContainerStarted","Data":"81add57eb7d12481eebd82f7729d18c5d2c5076fc39fa786742ae6a801f185d6"} Feb 26 08:16:12 crc kubenswrapper[4741]: I0226 08:16:12.787167 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:12 crc kubenswrapper[4741]: I0226 08:16:12.787197 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:16:12 crc kubenswrapper[4741]: E0226 08:16:12.787569 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:16:12 crc kubenswrapper[4741]: E0226 08:16:12.787660 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:16:12 crc kubenswrapper[4741]: I0226 08:16:12.787209 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:16:12 crc kubenswrapper[4741]: E0226 08:16:12.789054 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:16:13 crc kubenswrapper[4741]: I0226 08:16:13.787059 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:16:13 crc kubenswrapper[4741]: E0226 08:16:13.787335 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:16:14 crc kubenswrapper[4741]: I0226 08:16:14.786927 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:14 crc kubenswrapper[4741]: I0226 08:16:14.786949 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:16:14 crc kubenswrapper[4741]: E0226 08:16:14.787212 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 08:16:14 crc kubenswrapper[4741]: E0226 08:16:14.787333 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 08:16:14 crc kubenswrapper[4741]: I0226 08:16:14.788371 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:16:14 crc kubenswrapper[4741]: E0226 08:16:14.788719 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 08:16:15 crc kubenswrapper[4741]: I0226 08:16:15.786447 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:16:15 crc kubenswrapper[4741]: E0226 08:16:15.789269 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zlfsg" podUID="f2840647-3181-4a32-9386-b7f030bb9356" Feb 26 08:16:16 crc kubenswrapper[4741]: I0226 08:16:16.786488 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:16:16 crc kubenswrapper[4741]: I0226 08:16:16.786550 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:16 crc kubenswrapper[4741]: I0226 08:16:16.786562 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:16:16 crc kubenswrapper[4741]: I0226 08:16:16.789753 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 08:16:16 crc kubenswrapper[4741]: I0226 08:16:16.790037 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 08:16:16 crc kubenswrapper[4741]: I0226 08:16:16.790203 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 08:16:16 crc kubenswrapper[4741]: I0226 08:16:16.791823 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 08:16:17 crc kubenswrapper[4741]: I0226 08:16:17.786931 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:16:17 crc kubenswrapper[4741]: I0226 08:16:17.790475 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 08:16:17 crc kubenswrapper[4741]: I0226 08:16:17.790484 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.380450 4741 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.440193 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-llf79"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.441166 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nhrbh"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.441495 4741 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.442800 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.443307 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.443680 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5l64"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.443827 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.444645 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.446859 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.447861 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.448749 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.449821 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.449947 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.450272 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.450721 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.451259 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.454940 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-42f2w"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.458460 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.460449 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.460763 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.464947 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.465505 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.465661 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.465765 4741 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.465938 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.466040 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.466188 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.466206 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.466268 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.465981 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.466637 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.467158 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.467354 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.470336 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ptx5j"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.470782 4741 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.470897 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.471172 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.471391 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.471614 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.472779 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.472795 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ptx5j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.474849 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.479618 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.479630 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.480195 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.479708 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.479782 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.479834 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.484869 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-thvbc"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.515387 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.516506 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-thvbc" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.517337 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.518907 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.519130 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.520316 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.520632 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.520735 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.520895 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.520955 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.521155 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.521254 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.523179 4741 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.524102 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.524390 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.524583 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.525464 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.525721 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.526007 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.540086 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.540350 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.540414 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.540781 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 
08:16:19.540903 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.541128 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543297 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fdadf1f-38a7-41a9-ab52-e750457f3e00-serving-cert\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543349 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543380 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-image-import-ca\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543424 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-config\") pod \"route-controller-manager-6576b87f9c-7qhb6\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543448 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-etcd-serving-ca\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543473 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543499 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-serving-cert\") pod \"route-controller-manager-6576b87f9c-7qhb6\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543526 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz9xq\" (UniqueName: \"kubernetes.io/projected/439540a4-1c57-4c48-81ba-842dc3d88804-kube-api-access-gz9xq\") pod \"cluster-samples-operator-665b6dd947-qtjq7\" (UID: \"439540a4-1c57-4c48-81ba-842dc3d88804\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543561 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543590 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543633 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79be0654-3564-4cd1-87f7-e9eb1c972bbd-config\") pod \"machine-api-operator-5694c8668f-llf79\" (UID: \"79be0654-3564-4cd1-87f7-e9eb1c972bbd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543660 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b281845-065b-47b4-9bd9-2d45ce79b693-serving-cert\") pod \"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543687 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvn4m\" (UniqueName: \"kubernetes.io/projected/7b281845-065b-47b4-9bd9-2d45ce79b693-kube-api-access-nvn4m\") pod \"controller-manager-879f6c89f-r5l64\" (UID: 
\"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543710 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fdadf1f-38a7-41a9-ab52-e750457f3e00-etcd-client\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543736 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c718e96-1e2d-41e8-beff-d68534e49add-encryption-config\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543765 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fj7k\" (UniqueName: \"kubernetes.io/projected/8c718e96-1e2d-41e8-beff-d68534e49add-kube-api-access-8fj7k\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543789 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c718e96-1e2d-41e8-beff-d68534e49add-audit-dir\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543815 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/d33f8d5a-b472-4fb9-9905-e22f985be009-machine-approver-tls\") pod \"machine-approver-56656f9798-bfq7l\" (UID: \"d33f8d5a-b472-4fb9-9905-e22f985be009\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543837 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d33f8d5a-b472-4fb9-9905-e22f985be009-config\") pod \"machine-approver-56656f9798-bfq7l\" (UID: \"d33f8d5a-b472-4fb9-9905-e22f985be009\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543862 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d828q\" (UniqueName: \"kubernetes.io/projected/59420a86-a033-4cbe-98bf-3ec780191ed6-kube-api-access-d828q\") pod \"downloads-7954f5f757-ptx5j\" (UID: \"59420a86-a033-4cbe-98bf-3ec780191ed6\") " pod="openshift-console/downloads-7954f5f757-ptx5j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543901 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-audit\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.543924 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 
08:16:19.543974 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5fdadf1f-38a7-41a9-ab52-e750457f3e00-node-pullsecrets\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544013 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c718e96-1e2d-41e8-beff-d68534e49add-serving-cert\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544039 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5fdadf1f-38a7-41a9-ab52-e750457f3e00-encryption-config\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544078 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfl6l\" (UniqueName: \"kubernetes.io/projected/d33f8d5a-b472-4fb9-9905-e22f985be009-kube-api-access-qfl6l\") pod \"machine-approver-56656f9798-bfq7l\" (UID: \"d33f8d5a-b472-4fb9-9905-e22f985be009\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544148 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c718e96-1e2d-41e8-beff-d68534e49add-audit-policies\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: 
\"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544174 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544197 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98312bcf-f7e9-4868-904a-c27e825ce830-audit-dir\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544227 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544245 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hn57j"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544581 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544716 4741 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544802 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bh9dr"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544859 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544251 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4r9s\" (UniqueName: \"kubernetes.io/projected/98312bcf-f7e9-4868-904a-c27e825ce830-kube-api-access-m4r9s\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.544985 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545011 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c718e96-1e2d-41e8-beff-d68534e49add-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545036 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-config\") pod \"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545060 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-config\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545070 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545131 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c718e96-1e2d-41e8-beff-d68534e49add-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545156 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/79be0654-3564-4cd1-87f7-e9eb1c972bbd-images\") pod \"machine-api-operator-5694c8668f-llf79\" (UID: \"79be0654-3564-4cd1-87f7-e9eb1c972bbd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545193 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/439540a4-1c57-4c48-81ba-842dc3d88804-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qtjq7\" 
(UID: \"439540a4-1c57-4c48-81ba-842dc3d88804\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545219 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545229 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545244 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdfnl\" (UniqueName: \"kubernetes.io/projected/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-kube-api-access-gdfnl\") pod \"route-controller-manager-6576b87f9c-7qhb6\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545268 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d33f8d5a-b472-4fb9-9905-e22f985be009-auth-proxy-config\") pod \"machine-approver-56656f9798-bfq7l\" (UID: \"d33f8d5a-b472-4fb9-9905-e22f985be009\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545296 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-client-ca\") pod 
\"route-controller-manager-6576b87f9c-7qhb6\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545321 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-audit-policies\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545358 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/79be0654-3564-4cd1-87f7-e9eb1c972bbd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-llf79\" (UID: \"79be0654-3564-4cd1-87f7-e9eb1c972bbd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545381 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dl9p\" (UniqueName: \"kubernetes.io/projected/79be0654-3564-4cd1-87f7-e9eb1c972bbd-kube-api-access-4dl9p\") pod \"machine-api-operator-5694c8668f-llf79\" (UID: \"79be0654-3564-4cd1-87f7-e9eb1c972bbd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545413 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fdadf1f-38a7-41a9-ab52-e750457f3e00-audit-dir\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545435 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghwnp\" (UniqueName: \"kubernetes.io/projected/5fdadf1f-38a7-41a9-ab52-e750457f3e00-kube-api-access-ghwnp\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545459 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545485 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545509 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545547 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-client-ca\") pod \"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545569 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545596 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c718e96-1e2d-41e8-beff-d68534e49add-etcd-client\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.545785 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.546483 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.546671 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.548387 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.549612 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.550545 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.550862 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.551028 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.556447 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.556723 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.556885 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 26 
08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.557268 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.558208 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.558791 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.560269 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.564187 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.564337 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8jb7x"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.567933 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.568527 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.569148 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.570427 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.570763 4741 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"serving-cert" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.573100 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.571043 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.571131 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.573281 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xmgrw"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.571172 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.571253 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.573451 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.571323 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.574034 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.574637 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.574879 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.575038 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.577529 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.578537 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-hdqgn"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.579211 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.579394 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xmgrw" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.580093 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9zw77"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.580179 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.580672 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.581478 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.581626 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.582084 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.582527 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.583859 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.583902 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.583913 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.584850 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.585034 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.585776 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.586041 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.586219 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.604637 4741 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.605123 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.605429 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.605651 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.606204 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.606253 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.611306 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.612348 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.624742 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.626247 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.626424 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bcnnc"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.627084 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.629241 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.629326 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.629797 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.630156 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.630328 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.630580 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.631036 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.632345 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nrk4h"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.633805 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.634286 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.634842 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.635260 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.635780 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.636299 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qcpm7"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.636774 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.637018 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wtbm"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.643522 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.643640 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.643839 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.644045 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.644129 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.644479 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.644721 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.644898 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.645044 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.645308 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nhrbh"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.645387 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.646380 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.646420 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-config\") pod \"route-controller-manager-6576b87f9c-7qhb6\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.646463 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-etcd-serving-ca\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.646495 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xgn6m\" (UID: \"9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.646522 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xgn6m\" (UID: \"9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.646548 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4923cd-a652-4027-9945-5b20f94b0fff-config\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.646577 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-serving-cert\") pod \"route-controller-manager-6576b87f9c-7qhb6\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" 
Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.646605 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz9xq\" (UniqueName: \"kubernetes.io/projected/439540a4-1c57-4c48-81ba-842dc3d88804-kube-api-access-gz9xq\") pod \"cluster-samples-operator-665b6dd947-qtjq7\" (UID: \"439540a4-1c57-4c48-81ba-842dc3d88804\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.646633 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.646701 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-etcd-service-ca\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647167 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647464 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647686 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ceb1ab9-9ce4-4a40-9273-727f0499aa21-serving-cert\") pod \"openshift-config-operator-7777fb866f-9zw77\" (UID: \"1ceb1ab9-9ce4-4a40-9273-727f0499aa21\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647730 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79be0654-3564-4cd1-87f7-e9eb1c972bbd-config\") pod \"machine-api-operator-5694c8668f-llf79\" (UID: \"79be0654-3564-4cd1-87f7-e9eb1c972bbd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647760 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c718e96-1e2d-41e8-beff-d68534e49add-encryption-config\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647784 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b281845-065b-47b4-9bd9-2d45ce79b693-serving-cert\") pod \"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647790 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-etcd-serving-ca\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 
08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647807 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvn4m\" (UniqueName: \"kubernetes.io/projected/7b281845-065b-47b4-9bd9-2d45ce79b693-kube-api-access-nvn4m\") pod \"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647832 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fdadf1f-38a7-41a9-ab52-e750457f3e00-etcd-client\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647864 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/87ce91fe-866c-44e7-8c94-f3d7a994cc75-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xglrq\" (UID: \"87ce91fe-866c-44e7-8c94-f3d7a994cc75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647896 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fj7k\" (UniqueName: \"kubernetes.io/projected/8c718e96-1e2d-41e8-beff-d68534e49add-kube-api-access-8fj7k\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647922 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/1ceb1ab9-9ce4-4a40-9273-727f0499aa21-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9zw77\" (UID: \"1ceb1ab9-9ce4-4a40-9273-727f0499aa21\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647925 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.647966 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c718e96-1e2d-41e8-beff-d68534e49add-audit-dir\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648014 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d33f8d5a-b472-4fb9-9905-e22f985be009-machine-approver-tls\") pod \"machine-approver-56656f9798-bfq7l\" (UID: \"d33f8d5a-b472-4fb9-9905-e22f985be009\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648037 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d33f8d5a-b472-4fb9-9905-e22f985be009-config\") pod \"machine-approver-56656f9798-bfq7l\" (UID: \"d33f8d5a-b472-4fb9-9905-e22f985be009\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648061 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d828q\" (UniqueName: \"kubernetes.io/projected/59420a86-a033-4cbe-98bf-3ec780191ed6-kube-api-access-d828q\") pod 
\"downloads-7954f5f757-ptx5j\" (UID: \"59420a86-a033-4cbe-98bf-3ec780191ed6\") " pod="openshift-console/downloads-7954f5f757-ptx5j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648086 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-oauth-serving-cert\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648126 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648139 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-audit\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648176 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-config\") pod \"route-controller-manager-6576b87f9c-7qhb6\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648212 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648241 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5fdadf1f-38a7-41a9-ab52-e750457f3e00-node-pullsecrets\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648264 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87ce91fe-866c-44e7-8c94-f3d7a994cc75-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xglrq\" (UID: \"87ce91fe-866c-44e7-8c94-f3d7a994cc75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648293 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c718e96-1e2d-41e8-beff-d68534e49add-serving-cert\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648311 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-75m4m\" (UID: \"1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" Feb 26 
08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648329 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6-config\") pod \"kube-apiserver-operator-766d6c64bb-xgn6m\" (UID: \"9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648349 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-serving-cert\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648372 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjfn8\" (UniqueName: \"kubernetes.io/projected/1ceb1ab9-9ce4-4a40-9273-727f0499aa21-kube-api-access-xjfn8\") pod \"openshift-config-operator-7777fb866f-9zw77\" (UID: \"1ceb1ab9-9ce4-4a40-9273-727f0499aa21\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648397 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c718e96-1e2d-41e8-beff-d68534e49add-audit-policies\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648419 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648438 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5fdadf1f-38a7-41a9-ab52-e750457f3e00-encryption-config\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648460 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfl6l\" (UniqueName: \"kubernetes.io/projected/d33f8d5a-b472-4fb9-9905-e22f985be009-kube-api-access-qfl6l\") pod \"machine-approver-56656f9798-bfq7l\" (UID: \"d33f8d5a-b472-4fb9-9905-e22f985be009\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648483 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7a6edd5-0d0d-431a-9884-af988d7db265-trusted-ca\") pod \"console-operator-58897d9998-8jb7x\" (UID: \"c7a6edd5-0d0d-431a-9884-af988d7db265\") " pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648503 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb8vw\" (UniqueName: \"kubernetes.io/projected/1a8d4594-ac99-42e7-bfee-f526419ee990-kube-api-access-kb8vw\") pod \"dns-operator-744455d44c-thvbc\" (UID: \"1a8d4594-ac99-42e7-bfee-f526419ee990\") " pod="openshift-dns-operator/dns-operator-744455d44c-thvbc" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648526 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlxgn\" (UniqueName: \"kubernetes.io/projected/a1087876-b61e-42ed-bd63-0ede0e6a09e3-kube-api-access-zlxgn\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648557 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98312bcf-f7e9-4868-904a-c27e825ce830-audit-dir\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648578 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648600 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4r9s\" (UniqueName: \"kubernetes.io/projected/98312bcf-f7e9-4868-904a-c27e825ce830-kube-api-access-m4r9s\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648625 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkps9\" (UniqueName: \"kubernetes.io/projected/1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2-kube-api-access-gkps9\") pod \"kube-storage-version-migrator-operator-b67b599dd-75m4m\" (UID: \"1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648650 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-etcd-ca\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648662 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-audit\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648667 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-trusted-ca-bundle\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648690 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87ce91fe-866c-44e7-8c94-f3d7a994cc75-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xglrq\" (UID: \"87ce91fe-866c-44e7-8c94-f3d7a994cc75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648712 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c718e96-1e2d-41e8-beff-d68534e49add-audit-dir\") pod 
\"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648715 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648784 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c718e96-1e2d-41e8-beff-d68534e49add-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648809 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-config\") pod \"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648842 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-config\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648873 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8c718e96-1e2d-41e8-beff-d68534e49add-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648913 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/79be0654-3564-4cd1-87f7-e9eb1c972bbd-images\") pod \"machine-api-operator-5694c8668f-llf79\" (UID: \"79be0654-3564-4cd1-87f7-e9eb1c972bbd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648936 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfc89\" (UniqueName: \"kubernetes.io/projected/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-kube-api-access-qfc89\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648959 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/439540a4-1c57-4c48-81ba-842dc3d88804-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qtjq7\" (UID: \"439540a4-1c57-4c48-81ba-842dc3d88804\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.648980 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: 
I0226 08:16:19.649002 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-75m4m\" (UID: \"1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.649023 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf8qh\" (UniqueName: \"kubernetes.io/projected/4a4923cd-a652-4027-9945-5b20f94b0fff-kube-api-access-pf8qh\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.649044 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-config\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.649063 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-etcd-client\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.649086 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdfnl\" (UniqueName: 
\"kubernetes.io/projected/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-kube-api-access-gdfnl\") pod \"route-controller-manager-6576b87f9c-7qhb6\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.649128 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d33f8d5a-b472-4fb9-9905-e22f985be009-auth-proxy-config\") pod \"machine-approver-56656f9798-bfq7l\" (UID: \"d33f8d5a-b472-4fb9-9905-e22f985be009\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.649158 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5htmr\" (UniqueName: \"kubernetes.io/projected/c7a6edd5-0d0d-431a-9884-af988d7db265-kube-api-access-5htmr\") pod \"console-operator-58897d9998-8jb7x\" (UID: \"c7a6edd5-0d0d-431a-9884-af988d7db265\") " pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.649182 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-client-ca\") pod \"route-controller-manager-6576b87f9c-7qhb6\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.649202 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-audit-policies\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 
08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.649237 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-oauth-config\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.649900 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-config\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.650366 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c718e96-1e2d-41e8-beff-d68534e49add-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.651049 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/79be0654-3564-4cd1-87f7-e9eb1c972bbd-images\") pod \"machine-api-operator-5694c8668f-llf79\" (UID: \"79be0654-3564-4cd1-87f7-e9eb1c972bbd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.652814 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c718e96-1e2d-41e8-beff-d68534e49add-encryption-config\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc 
kubenswrapper[4741]: I0226 08:16:19.653223 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.653453 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-llf79"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.653647 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.653661 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79be0654-3564-4cd1-87f7-e9eb1c972bbd-config\") pod \"machine-api-operator-5694c8668f-llf79\" (UID: \"79be0654-3564-4cd1-87f7-e9eb1c972bbd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.653721 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98312bcf-f7e9-4868-904a-c27e825ce830-audit-dir\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.654005 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d33f8d5a-b472-4fb9-9905-e22f985be009-config\") pod \"machine-approver-56656f9798-bfq7l\" (UID: \"d33f8d5a-b472-4fb9-9905-e22f985be009\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.654490 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-serving-cert\") pod \"route-controller-manager-6576b87f9c-7qhb6\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.654785 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5tpv5"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.654840 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655184 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-config\") pod \"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655388 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/79be0654-3564-4cd1-87f7-e9eb1c972bbd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-llf79\" 
(UID: \"79be0654-3564-4cd1-87f7-e9eb1c972bbd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655415 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dl9p\" (UniqueName: \"kubernetes.io/projected/79be0654-3564-4cd1-87f7-e9eb1c972bbd-kube-api-access-4dl9p\") pod \"machine-api-operator-5694c8668f-llf79\" (UID: \"79be0654-3564-4cd1-87f7-e9eb1c972bbd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655437 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a6edd5-0d0d-431a-9884-af988d7db265-config\") pod \"console-operator-58897d9998-8jb7x\" (UID: \"c7a6edd5-0d0d-431a-9884-af988d7db265\") " pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655460 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a4923cd-a652-4027-9945-5b20f94b0fff-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655479 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp84w\" (UniqueName: \"kubernetes.io/projected/87ce91fe-866c-44e7-8c94-f3d7a994cc75-kube-api-access-fp84w\") pod \"cluster-image-registry-operator-dc59b4c8b-xglrq\" (UID: \"87ce91fe-866c-44e7-8c94-f3d7a994cc75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655684 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655722 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fdadf1f-38a7-41a9-ab52-e750457f3e00-audit-dir\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655743 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghwnp\" (UniqueName: \"kubernetes.io/projected/5fdadf1f-38a7-41a9-ab52-e750457f3e00-kube-api-access-ghwnp\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655766 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655787 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655807 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6wr6\" (UniqueName: \"kubernetes.io/projected/14003e7a-87ba-4ef7-8817-96288f162752-kube-api-access-z6wr6\") pod \"migrator-59844c95c7-xmgrw\" (UID: \"14003e7a-87ba-4ef7-8817-96288f162752\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xmgrw" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655830 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a4923cd-a652-4027-9945-5b20f94b0fff-service-ca-bundle\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655830 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d33f8d5a-b472-4fb9-9905-e22f985be009-machine-approver-tls\") pod \"machine-approver-56656f9798-bfq7l\" (UID: \"d33f8d5a-b472-4fb9-9905-e22f985be009\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655897 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a8d4594-ac99-42e7-bfee-f526419ee990-metrics-tls\") pod \"dns-operator-744455d44c-thvbc\" (UID: \"1a8d4594-ac99-42e7-bfee-f526419ee990\") " pod="openshift-dns-operator/dns-operator-744455d44c-thvbc" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655850 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.655918 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-serving-cert\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656196 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fdadf1f-38a7-41a9-ab52-e750457f3e00-audit-dir\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656227 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5fdadf1f-38a7-41a9-ab52-e750457f3e00-node-pullsecrets\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656283 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-config\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656332 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-service-ca\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656370 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c718e96-1e2d-41e8-beff-d68534e49add-etcd-client\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656402 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-client-ca\") pod \"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656444 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656479 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7a6edd5-0d0d-431a-9884-af988d7db265-serving-cert\") pod \"console-operator-58897d9998-8jb7x\" (UID: \"c7a6edd5-0d0d-431a-9884-af988d7db265\") " pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656509 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a4923cd-a652-4027-9945-5b20f94b0fff-serving-cert\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656551 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fdadf1f-38a7-41a9-ab52-e750457f3e00-serving-cert\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656583 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-image-import-ca\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656665 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c718e96-1e2d-41e8-beff-d68534e49add-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656762 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656939 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.656981 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fdadf1f-38a7-41a9-ab52-e750457f3e00-etcd-client\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.657265 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.657750 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5fdadf1f-38a7-41a9-ab52-e750457f3e00-image-import-ca\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.657855 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.658317 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-audit-policies\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.658423 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5tpv5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.658488 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-client-ca\") pod \"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.658566 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d33f8d5a-b472-4fb9-9905-e22f985be009-auth-proxy-config\") pod \"machine-approver-56656f9798-bfq7l\" (UID: \"d33f8d5a-b472-4fb9-9905-e22f985be009\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.658730 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5fdadf1f-38a7-41a9-ab52-e750457f3e00-encryption-config\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc 
kubenswrapper[4741]: I0226 08:16:19.659136 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.659853 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.661227 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c718e96-1e2d-41e8-beff-d68534e49add-audit-policies\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.661481 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c718e96-1e2d-41e8-beff-d68534e49add-serving-cert\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.661570 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-client-ca\") pod \"route-controller-manager-6576b87f9c-7qhb6\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.661653 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b281845-065b-47b4-9bd9-2d45ce79b693-serving-cert\") pod \"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 
08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.661787 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.661860 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fdadf1f-38a7-41a9-ab52-e750457f3e00-serving-cert\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.661848 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/79be0654-3564-4cd1-87f7-e9eb1c972bbd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-llf79\" (UID: \"79be0654-3564-4cd1-87f7-e9eb1c972bbd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.661888 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.663198 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c718e96-1e2d-41e8-beff-d68534e49add-etcd-client\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: 
\"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.663643 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.664413 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.665522 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.665614 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.665904 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.668205 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.672282 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/439540a4-1c57-4c48-81ba-842dc3d88804-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qtjq7\" (UID: \"439540a4-1c57-4c48-81ba-842dc3d88804\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.673323 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-thvbc"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.674982 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5l64"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.676995 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xmgrw"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.678296 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.680122 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hdqgn"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.680927 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hn57j"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.681877 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.689590 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console-operator/console-operator-58897d9998-8jb7x"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.689914 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.698843 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.701314 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.702726 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.704203 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.704847 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ptx5j"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.706134 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534896-rcrbz"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.707173 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534896-rcrbz" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.707554 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.708501 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.709037 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.710134 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2264l"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.710979 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2264l" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.711323 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9zw77"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.712434 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wtbm"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.713486 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-85c82"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.714669 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-85c82" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.715051 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.716450 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.719164 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.720398 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bcnnc"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.721739 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-42f2w"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.723058 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.723564 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.724392 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.725629 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bh9dr"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.726832 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p"] Feb 26 
08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.727889 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.729018 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534896-rcrbz"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.730336 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qcpm7"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.731523 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.732670 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.733730 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8m2b9"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.734698 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8m2b9" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.734891 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mt4jv"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.736605 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.736716 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.737664 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-85c82"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.739093 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mt4jv"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.740279 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2264l"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.741522 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5tpv5"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.743779 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.745057 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.746483 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.747801 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx"] Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.750258 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757468 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-stats-auth\") pod 
\"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757518 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81eb5f5b-a203-4076-97e2-8854cf7a22bd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2js2k\" (UID: \"81eb5f5b-a203-4076-97e2-8854cf7a22bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757555 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e7c8235f-88c7-4d87-b1b5-9514cb07f9cf-srv-cert\") pod \"catalog-operator-68c6474976-62dzt\" (UID: \"e7c8235f-88c7-4d87-b1b5-9514cb07f9cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757590 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf8qh\" (UniqueName: \"kubernetes.io/projected/4a4923cd-a652-4027-9945-5b20f94b0fff-kube-api-access-pf8qh\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757614 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-etcd-client\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757636 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qfc89\" (UniqueName: \"kubernetes.io/projected/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-kube-api-access-qfc89\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757654 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ccea6218-4c8e-45dd-890f-5f9fd1806c99-profile-collector-cert\") pod \"olm-operator-6b444d44fb-66p6t\" (UID: \"ccea6218-4c8e-45dd-890f-5f9fd1806c99\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757699 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5htmr\" (UniqueName: \"kubernetes.io/projected/c7a6edd5-0d0d-431a-9884-af988d7db265-kube-api-access-5htmr\") pod \"console-operator-58897d9998-8jb7x\" (UID: \"c7a6edd5-0d0d-431a-9884-af988d7db265\") " pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757723 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f00ebe1b-d72e-48e7-8a75-17f830a53378-images\") pod \"machine-config-operator-74547568cd-r8ps4\" (UID: \"f00ebe1b-d72e-48e7-8a75-17f830a53378\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757743 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d14ecdcf-bbd4-4088-b767-19deee670b4c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mwxr2\" (UID: 
\"d14ecdcf-bbd4-4088-b767-19deee670b4c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757762 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ccea6218-4c8e-45dd-890f-5f9fd1806c99-srv-cert\") pod \"olm-operator-6b444d44fb-66p6t\" (UID: \"ccea6218-4c8e-45dd-890f-5f9fd1806c99\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757800 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08700e8f-7c54-4660-a65f-1871db5bbe5e-serving-cert\") pod \"service-ca-operator-777779d784-9v7ng\" (UID: \"08700e8f-7c54-4660-a65f-1871db5bbe5e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757840 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j59h4\" (UniqueName: \"kubernetes.io/projected/15f99982-b491-4a49-8fb9-f6355b956e11-kube-api-access-j59h4\") pod \"marketplace-operator-79b997595-5wtbm\" (UID: \"15f99982-b491-4a49-8fb9-f6355b956e11\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757888 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a8d4594-ac99-42e7-bfee-f526419ee990-metrics-tls\") pod \"dns-operator-744455d44c-thvbc\" (UID: \"1a8d4594-ac99-42e7-bfee-f526419ee990\") " pod="openshift-dns-operator/dns-operator-744455d44c-thvbc" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757916 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-serving-cert\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757941 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81eb5f5b-a203-4076-97e2-8854cf7a22bd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2js2k\" (UID: \"81eb5f5b-a203-4076-97e2-8854cf7a22bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.757969 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7xjl\" (UniqueName: \"kubernetes.io/projected/9eef37bd-7913-4e1c-baf0-775f14f6e18a-kube-api-access-b7xjl\") pod \"control-plane-machine-set-operator-78cbb6b69f-hgdrd\" (UID: \"9eef37bd-7913-4e1c-baf0-775f14f6e18a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758001 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4fc717c-df6a-4ba5-a998-6385257e6f7e-secret-volume\") pod \"collect-profiles-29534895-b2gkh\" (UID: \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758027 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z72tt\" (UniqueName: \"kubernetes.io/projected/00bfb7f6-7024-4431-9fc6-f86f8ff5e363-kube-api-access-z72tt\") pod \"package-server-manager-789f6589d5-llgbc\" (UID: 
\"00bfb7f6-7024-4431-9fc6-f86f8ff5e363\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758057 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-default-certificate\") pod \"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758088 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2e8bf27-ab59-474a-a502-f214701d5208-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jk2b5\" (UID: \"e2e8bf27-ab59-474a-a502-f214701d5208\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758134 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f00ebe1b-d72e-48e7-8a75-17f830a53378-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r8ps4\" (UID: \"f00ebe1b-d72e-48e7-8a75-17f830a53378\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758165 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e7c8235f-88c7-4d87-b1b5-9514cb07f9cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-62dzt\" (UID: \"e7c8235f-88c7-4d87-b1b5-9514cb07f9cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" Feb 26 08:16:19 crc kubenswrapper[4741]: 
I0226 08:16:19.758241 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xgn6m\" (UID: \"9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758353 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15f99982-b491-4a49-8fb9-f6355b956e11-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5wtbm\" (UID: \"15f99982-b491-4a49-8fb9-f6355b956e11\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758418 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4923cd-a652-4027-9945-5b20f94b0fff-config\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758448 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-etcd-service-ca\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758474 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxrq9\" (UniqueName: \"kubernetes.io/projected/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-kube-api-access-qxrq9\") pod 
\"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758501 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ceb1ab9-9ce4-4a40-9273-727f0499aa21-serving-cert\") pod \"openshift-config-operator-7777fb866f-9zw77\" (UID: \"1ceb1ab9-9ce4-4a40-9273-727f0499aa21\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758521 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08700e8f-7c54-4660-a65f-1871db5bbe5e-config\") pod \"service-ca-operator-777779d784-9v7ng\" (UID: \"08700e8f-7c54-4660-a65f-1871db5bbe5e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758550 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd3e06d-6012-4f0e-9425-91836b431b5b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-79j6p\" (UID: \"4fd3e06d-6012-4f0e-9425-91836b431b5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758738 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d3676b67-b73e-428d-bcfc-ba5be6f44bb1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5tpv5\" (UID: \"d3676b67-b73e-428d-bcfc-ba5be6f44bb1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5tpv5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758787 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1ceb1ab9-9ce4-4a40-9273-727f0499aa21-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9zw77\" (UID: \"1ceb1ab9-9ce4-4a40-9273-727f0499aa21\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758884 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktlhl\" (UniqueName: \"kubernetes.io/projected/66b44309-5cc3-4f00-bb25-0b0ef360e06e-kube-api-access-ktlhl\") pod \"service-ca-9c57cc56f-qcpm7\" (UID: \"66b44309-5cc3-4f00-bb25-0b0ef360e06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758911 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/66b44309-5cc3-4f00-bb25-0b0ef360e06e-signing-cabundle\") pod \"service-ca-9c57cc56f-qcpm7\" (UID: \"66b44309-5cc3-4f00-bb25-0b0ef360e06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758938 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87ce91fe-866c-44e7-8c94-f3d7a994cc75-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xglrq\" (UID: \"87ce91fe-866c-44e7-8c94-f3d7a994cc75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.758987 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7a6edd5-0d0d-431a-9884-af988d7db265-trusted-ca\") pod \"console-operator-58897d9998-8jb7x\" (UID: \"c7a6edd5-0d0d-431a-9884-af988d7db265\") " 
pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759014 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb8vw\" (UniqueName: \"kubernetes.io/projected/1a8d4594-ac99-42e7-bfee-f526419ee990-kube-api-access-kb8vw\") pod \"dns-operator-744455d44c-thvbc\" (UID: \"1a8d4594-ac99-42e7-bfee-f526419ee990\") " pod="openshift-dns-operator/dns-operator-744455d44c-thvbc" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759041 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlxgn\" (UniqueName: \"kubernetes.io/projected/a1087876-b61e-42ed-bd63-0ede0e6a09e3-kube-api-access-zlxgn\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759078 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-metrics-certs\") pod \"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759152 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkps9\" (UniqueName: \"kubernetes.io/projected/1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2-kube-api-access-gkps9\") pod \"kube-storage-version-migrator-operator-b67b599dd-75m4m\" (UID: \"1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759166 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-etcd-service-ca\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759180 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1ceb1ab9-9ce4-4a40-9273-727f0499aa21-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9zw77\" (UID: \"1ceb1ab9-9ce4-4a40-9273-727f0499aa21\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759187 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dcr2\" (UniqueName: \"kubernetes.io/projected/d3676b67-b73e-428d-bcfc-ba5be6f44bb1-kube-api-access-9dcr2\") pod \"multus-admission-controller-857f4d67dd-5tpv5\" (UID: \"d3676b67-b73e-428d-bcfc-ba5be6f44bb1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5tpv5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759280 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzlpj\" (UniqueName: \"kubernetes.io/projected/e7c8235f-88c7-4d87-b1b5-9514cb07f9cf-kube-api-access-rzlpj\") pod \"catalog-operator-68c6474976-62dzt\" (UID: \"e7c8235f-88c7-4d87-b1b5-9514cb07f9cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759351 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a31d7247-0e6e-4cff-9d4e-ff7391171f8f-proxy-tls\") pod \"machine-config-controller-84d6567774-6lwxx\" (UID: \"a31d7247-0e6e-4cff-9d4e-ff7391171f8f\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759388 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15f99982-b491-4a49-8fb9-f6355b956e11-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5wtbm\" (UID: \"15f99982-b491-4a49-8fb9-f6355b956e11\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759458 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcwl4\" (UniqueName: \"kubernetes.io/projected/d14ecdcf-bbd4-4088-b767-19deee670b4c-kube-api-access-lcwl4\") pod \"openshift-controller-manager-operator-756b6f6bc6-mwxr2\" (UID: \"d14ecdcf-bbd4-4088-b767-19deee670b4c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759514 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e8bf27-ab59-474a-a502-f214701d5208-config\") pod \"kube-controller-manager-operator-78b949d7b-jk2b5\" (UID: \"e2e8bf27-ab59-474a-a502-f214701d5208\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759552 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-75m4m\" (UID: \"1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" Feb 26 08:16:19 crc 
kubenswrapper[4741]: I0226 08:16:19.759605 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-config\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759668 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6fs2\" (UniqueName: \"kubernetes.io/projected/08700e8f-7c54-4660-a65f-1871db5bbe5e-kube-api-access-f6fs2\") pod \"service-ca-operator-777779d784-9v7ng\" (UID: \"08700e8f-7c54-4660-a65f-1871db5bbe5e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759764 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tjtw\" (UniqueName: \"kubernetes.io/projected/ccea6218-4c8e-45dd-890f-5f9fd1806c99-kube-api-access-6tjtw\") pod \"olm-operator-6b444d44fb-66p6t\" (UID: \"ccea6218-4c8e-45dd-890f-5f9fd1806c99\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759800 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a6edd5-0d0d-431a-9884-af988d7db265-config\") pod \"console-operator-58897d9998-8jb7x\" (UID: \"c7a6edd5-0d0d-431a-9884-af988d7db265\") " pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759818 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a4923cd-a652-4027-9945-5b20f94b0fff-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hn57j\" (UID: 
\"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759834 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-oauth-config\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759855 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp84w\" (UniqueName: \"kubernetes.io/projected/87ce91fe-866c-44e7-8c94-f3d7a994cc75-kube-api-access-fp84w\") pod \"cluster-image-registry-operator-dc59b4c8b-xglrq\" (UID: \"87ce91fe-866c-44e7-8c94-f3d7a994cc75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759923 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c49h4\" (UniqueName: \"kubernetes.io/projected/f00ebe1b-d72e-48e7-8a75-17f830a53378-kube-api-access-c49h4\") pod \"machine-config-operator-74547568cd-r8ps4\" (UID: \"f00ebe1b-d72e-48e7-8a75-17f830a53378\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759972 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6wr6\" (UniqueName: \"kubernetes.io/projected/14003e7a-87ba-4ef7-8817-96288f162752-kube-api-access-z6wr6\") pod \"migrator-59844c95c7-xmgrw\" (UID: \"14003e7a-87ba-4ef7-8817-96288f162752\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xmgrw" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.759988 4741 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a4923cd-a652-4027-9945-5b20f94b0fff-service-ca-bundle\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760009 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-config\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760026 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81eb5f5b-a203-4076-97e2-8854cf7a22bd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2js2k\" (UID: \"81eb5f5b-a203-4076-97e2-8854cf7a22bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760046 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-service-ca\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760069 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx6xk\" (UniqueName: \"kubernetes.io/projected/a31d7247-0e6e-4cff-9d4e-ff7391171f8f-kube-api-access-jx6xk\") pod \"machine-config-controller-84d6567774-6lwxx\" (UID: \"a31d7247-0e6e-4cff-9d4e-ff7391171f8f\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760147 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7a6edd5-0d0d-431a-9884-af988d7db265-serving-cert\") pod \"console-operator-58897d9998-8jb7x\" (UID: \"c7a6edd5-0d0d-431a-9884-af988d7db265\") " pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760166 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd3e06d-6012-4f0e-9425-91836b431b5b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-79j6p\" (UID: \"4fd3e06d-6012-4f0e-9425-91836b431b5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760185 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a4923cd-a652-4027-9945-5b20f94b0fff-serving-cert\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760214 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-service-ca-bundle\") pod \"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760232 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2tgc\" 
(UniqueName: \"kubernetes.io/projected/b4fc717c-df6a-4ba5-a998-6385257e6f7e-kube-api-access-r2tgc\") pod \"collect-profiles-29534895-b2gkh\" (UID: \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760251 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a31d7247-0e6e-4cff-9d4e-ff7391171f8f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6lwxx\" (UID: \"a31d7247-0e6e-4cff-9d4e-ff7391171f8f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760269 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/00bfb7f6-7024-4431-9fc6-f86f8ff5e363-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-llgbc\" (UID: \"00bfb7f6-7024-4431-9fc6-f86f8ff5e363\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760298 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xgn6m\" (UID: \"9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760316 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/66b44309-5cc3-4f00-bb25-0b0ef360e06e-signing-key\") pod \"service-ca-9c57cc56f-qcpm7\" (UID: 
\"66b44309-5cc3-4f00-bb25-0b0ef360e06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760336 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d14ecdcf-bbd4-4088-b767-19deee670b4c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mwxr2\" (UID: \"d14ecdcf-bbd4-4088-b767-19deee670b4c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760354 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rf9\" (UniqueName: \"kubernetes.io/projected/4fd3e06d-6012-4f0e-9425-91836b431b5b-kube-api-access-88rf9\") pod \"openshift-apiserver-operator-796bbdcf4f-79j6p\" (UID: \"4fd3e06d-6012-4f0e-9425-91836b431b5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760380 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9eef37bd-7913-4e1c-baf0-775f14f6e18a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hgdrd\" (UID: \"9eef37bd-7913-4e1c-baf0-775f14f6e18a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760407 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/87ce91fe-866c-44e7-8c94-f3d7a994cc75-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xglrq\" (UID: \"87ce91fe-866c-44e7-8c94-f3d7a994cc75\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760469 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-oauth-serving-cert\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760523 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a6edd5-0d0d-431a-9884-af988d7db265-config\") pod \"console-operator-58897d9998-8jb7x\" (UID: \"c7a6edd5-0d0d-431a-9884-af988d7db265\") " pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.760528 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-config\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.761427 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-config\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.761648 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4923cd-a652-4027-9945-5b20f94b0fff-config\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.761776 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a4923cd-a652-4027-9945-5b20f94b0fff-service-ca-bundle\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.761896 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e8bf27-ab59-474a-a502-f214701d5208-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jk2b5\" (UID: \"e2e8bf27-ab59-474a-a502-f214701d5208\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.761958 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f00ebe1b-d72e-48e7-8a75-17f830a53378-proxy-tls\") pod \"machine-config-operator-74547568cd-r8ps4\" (UID: \"f00ebe1b-d72e-48e7-8a75-17f830a53378\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.762053 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-75m4m\" (UID: \"1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.762051 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7a6edd5-0d0d-431a-9884-af988d7db265-trusted-ca\") pod \"console-operator-58897d9998-8jb7x\" (UID: \"c7a6edd5-0d0d-431a-9884-af988d7db265\") " pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.762102 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6-config\") pod \"kube-apiserver-operator-766d6c64bb-xgn6m\" (UID: \"9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.763554 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-serving-cert\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.763601 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjfn8\" (UniqueName: \"kubernetes.io/projected/1ceb1ab9-9ce4-4a40-9273-727f0499aa21-kube-api-access-xjfn8\") pod \"openshift-config-operator-7777fb866f-9zw77\" (UID: \"1ceb1ab9-9ce4-4a40-9273-727f0499aa21\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.763055 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ceb1ab9-9ce4-4a40-9273-727f0499aa21-serving-cert\") pod \"openshift-config-operator-7777fb866f-9zw77\" (UID: \"1ceb1ab9-9ce4-4a40-9273-727f0499aa21\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.763099 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-service-ca\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.763677 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-etcd-ca\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.764860 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a4923cd-a652-4027-9945-5b20f94b0fff-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.765031 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6-config\") pod \"kube-apiserver-operator-766d6c64bb-xgn6m\" (UID: \"9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.765282 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/87ce91fe-866c-44e7-8c94-f3d7a994cc75-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-xglrq\" (UID: \"87ce91fe-866c-44e7-8c94-f3d7a994cc75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.765365 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-trusted-ca-bundle\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.773876 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87ce91fe-866c-44e7-8c94-f3d7a994cc75-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xglrq\" (UID: \"87ce91fe-866c-44e7-8c94-f3d7a994cc75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.773909 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4fc717c-df6a-4ba5-a998-6385257e6f7e-config-volume\") pod \"collect-profiles-29534895-b2gkh\" (UID: \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.765892 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-etcd-ca\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.765914 4741 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.765521 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-serving-cert\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.766505 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7a6edd5-0d0d-431a-9884-af988d7db265-serving-cert\") pod \"console-operator-58897d9998-8jb7x\" (UID: \"c7a6edd5-0d0d-431a-9884-af988d7db265\") " pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.766528 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xgn6m\" (UID: \"9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.770457 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-trusted-ca-bundle\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.765564 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-oauth-config\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " 
pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.766547 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-etcd-client\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.774468 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a8d4594-ac99-42e7-bfee-f526419ee990-metrics-tls\") pod \"dns-operator-744455d44c-thvbc\" (UID: \"1a8d4594-ac99-42e7-bfee-f526419ee990\") " pod="openshift-dns-operator/dns-operator-744455d44c-thvbc" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.775061 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a4923cd-a652-4027-9945-5b20f94b0fff-serving-cert\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.779044 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87ce91fe-866c-44e7-8c94-f3d7a994cc75-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xglrq\" (UID: \"87ce91fe-866c-44e7-8c94-f3d7a994cc75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.779353 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-serving-cert\") pod \"console-f9d7485db-hdqgn\" (UID: 
\"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.784293 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.793231 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-oauth-serving-cert\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.804605 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.824471 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.830250 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-75m4m\" (UID: \"1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.843521 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.863633 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: 
I0226 08:16:19.875626 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tjtw\" (UniqueName: \"kubernetes.io/projected/ccea6218-4c8e-45dd-890f-5f9fd1806c99-kube-api-access-6tjtw\") pod \"olm-operator-6b444d44fb-66p6t\" (UID: \"ccea6218-4c8e-45dd-890f-5f9fd1806c99\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.875673 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c49h4\" (UniqueName: \"kubernetes.io/projected/f00ebe1b-d72e-48e7-8a75-17f830a53378-kube-api-access-c49h4\") pod \"machine-config-operator-74547568cd-r8ps4\" (UID: \"f00ebe1b-d72e-48e7-8a75-17f830a53378\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.875737 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81eb5f5b-a203-4076-97e2-8854cf7a22bd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2js2k\" (UID: \"81eb5f5b-a203-4076-97e2-8854cf7a22bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.875761 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx6xk\" (UniqueName: \"kubernetes.io/projected/a31d7247-0e6e-4cff-9d4e-ff7391171f8f-kube-api-access-jx6xk\") pod \"machine-config-controller-84d6567774-6lwxx\" (UID: \"a31d7247-0e6e-4cff-9d4e-ff7391171f8f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.875791 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd3e06d-6012-4f0e-9425-91836b431b5b-serving-cert\") 
pod \"openshift-apiserver-operator-796bbdcf4f-79j6p\" (UID: \"4fd3e06d-6012-4f0e-9425-91836b431b5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.875819 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-service-ca-bundle\") pod \"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.875846 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2tgc\" (UniqueName: \"kubernetes.io/projected/b4fc717c-df6a-4ba5-a998-6385257e6f7e-kube-api-access-r2tgc\") pod \"collect-profiles-29534895-b2gkh\" (UID: \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.875873 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a31d7247-0e6e-4cff-9d4e-ff7391171f8f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6lwxx\" (UID: \"a31d7247-0e6e-4cff-9d4e-ff7391171f8f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.875898 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/00bfb7f6-7024-4431-9fc6-f86f8ff5e363-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-llgbc\" (UID: \"00bfb7f6-7024-4431-9fc6-f86f8ff5e363\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" Feb 26 08:16:19 crc 
kubenswrapper[4741]: I0226 08:16:19.875924 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/66b44309-5cc3-4f00-bb25-0b0ef360e06e-signing-key\") pod \"service-ca-9c57cc56f-qcpm7\" (UID: \"66b44309-5cc3-4f00-bb25-0b0ef360e06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.875950 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d14ecdcf-bbd4-4088-b767-19deee670b4c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mwxr2\" (UID: \"d14ecdcf-bbd4-4088-b767-19deee670b4c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.875970 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88rf9\" (UniqueName: \"kubernetes.io/projected/4fd3e06d-6012-4f0e-9425-91836b431b5b-kube-api-access-88rf9\") pod \"openshift-apiserver-operator-796bbdcf4f-79j6p\" (UID: \"4fd3e06d-6012-4f0e-9425-91836b431b5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.875991 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9eef37bd-7913-4e1c-baf0-775f14f6e18a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hgdrd\" (UID: \"9eef37bd-7913-4e1c-baf0-775f14f6e18a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876041 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e2e8bf27-ab59-474a-a502-f214701d5208-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jk2b5\" (UID: \"e2e8bf27-ab59-474a-a502-f214701d5208\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876059 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f00ebe1b-d72e-48e7-8a75-17f830a53378-proxy-tls\") pod \"machine-config-operator-74547568cd-r8ps4\" (UID: \"f00ebe1b-d72e-48e7-8a75-17f830a53378\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876103 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4fc717c-df6a-4ba5-a998-6385257e6f7e-config-volume\") pod \"collect-profiles-29534895-b2gkh\" (UID: \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876139 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-stats-auth\") pod \"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876158 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81eb5f5b-a203-4076-97e2-8854cf7a22bd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2js2k\" (UID: \"81eb5f5b-a203-4076-97e2-8854cf7a22bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" Feb 26 08:16:19 crc kubenswrapper[4741]: 
I0226 08:16:19.876186 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e7c8235f-88c7-4d87-b1b5-9514cb07f9cf-srv-cert\") pod \"catalog-operator-68c6474976-62dzt\" (UID: \"e7c8235f-88c7-4d87-b1b5-9514cb07f9cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876231 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ccea6218-4c8e-45dd-890f-5f9fd1806c99-profile-collector-cert\") pod \"olm-operator-6b444d44fb-66p6t\" (UID: \"ccea6218-4c8e-45dd-890f-5f9fd1806c99\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876268 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f00ebe1b-d72e-48e7-8a75-17f830a53378-images\") pod \"machine-config-operator-74547568cd-r8ps4\" (UID: \"f00ebe1b-d72e-48e7-8a75-17f830a53378\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876295 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d14ecdcf-bbd4-4088-b767-19deee670b4c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mwxr2\" (UID: \"d14ecdcf-bbd4-4088-b767-19deee670b4c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876314 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ccea6218-4c8e-45dd-890f-5f9fd1806c99-srv-cert\") pod \"olm-operator-6b444d44fb-66p6t\" (UID: 
\"ccea6218-4c8e-45dd-890f-5f9fd1806c99\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876332 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j59h4\" (UniqueName: \"kubernetes.io/projected/15f99982-b491-4a49-8fb9-f6355b956e11-kube-api-access-j59h4\") pod \"marketplace-operator-79b997595-5wtbm\" (UID: \"15f99982-b491-4a49-8fb9-f6355b956e11\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876368 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08700e8f-7c54-4660-a65f-1871db5bbe5e-serving-cert\") pod \"service-ca-operator-777779d784-9v7ng\" (UID: \"08700e8f-7c54-4660-a65f-1871db5bbe5e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876396 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81eb5f5b-a203-4076-97e2-8854cf7a22bd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2js2k\" (UID: \"81eb5f5b-a203-4076-97e2-8854cf7a22bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876415 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7xjl\" (UniqueName: \"kubernetes.io/projected/9eef37bd-7913-4e1c-baf0-775f14f6e18a-kube-api-access-b7xjl\") pod \"control-plane-machine-set-operator-78cbb6b69f-hgdrd\" (UID: \"9eef37bd-7913-4e1c-baf0-775f14f6e18a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876429 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4fc717c-df6a-4ba5-a998-6385257e6f7e-secret-volume\") pod \"collect-profiles-29534895-b2gkh\" (UID: \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876448 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z72tt\" (UniqueName: \"kubernetes.io/projected/00bfb7f6-7024-4431-9fc6-f86f8ff5e363-kube-api-access-z72tt\") pod \"package-server-manager-789f6589d5-llgbc\" (UID: \"00bfb7f6-7024-4431-9fc6-f86f8ff5e363\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876467 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-default-certificate\") pod \"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876490 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2e8bf27-ab59-474a-a502-f214701d5208-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jk2b5\" (UID: \"e2e8bf27-ab59-474a-a502-f214701d5208\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876516 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f00ebe1b-d72e-48e7-8a75-17f830a53378-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r8ps4\" (UID: 
\"f00ebe1b-d72e-48e7-8a75-17f830a53378\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876542 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e7c8235f-88c7-4d87-b1b5-9514cb07f9cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-62dzt\" (UID: \"e7c8235f-88c7-4d87-b1b5-9514cb07f9cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876578 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15f99982-b491-4a49-8fb9-f6355b956e11-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5wtbm\" (UID: \"15f99982-b491-4a49-8fb9-f6355b956e11\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876612 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxrq9\" (UniqueName: \"kubernetes.io/projected/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-kube-api-access-qxrq9\") pod \"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876648 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08700e8f-7c54-4660-a65f-1871db5bbe5e-config\") pod \"service-ca-operator-777779d784-9v7ng\" (UID: \"08700e8f-7c54-4660-a65f-1871db5bbe5e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876671 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4fd3e06d-6012-4f0e-9425-91836b431b5b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-79j6p\" (UID: \"4fd3e06d-6012-4f0e-9425-91836b431b5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876692 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d3676b67-b73e-428d-bcfc-ba5be6f44bb1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5tpv5\" (UID: \"d3676b67-b73e-428d-bcfc-ba5be6f44bb1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5tpv5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876725 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktlhl\" (UniqueName: \"kubernetes.io/projected/66b44309-5cc3-4f00-bb25-0b0ef360e06e-kube-api-access-ktlhl\") pod \"service-ca-9c57cc56f-qcpm7\" (UID: \"66b44309-5cc3-4f00-bb25-0b0ef360e06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876743 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/66b44309-5cc3-4f00-bb25-0b0ef360e06e-signing-cabundle\") pod \"service-ca-9c57cc56f-qcpm7\" (UID: \"66b44309-5cc3-4f00-bb25-0b0ef360e06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876780 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-metrics-certs\") pod \"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876844 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9dcr2\" (UniqueName: \"kubernetes.io/projected/d3676b67-b73e-428d-bcfc-ba5be6f44bb1-kube-api-access-9dcr2\") pod \"multus-admission-controller-857f4d67dd-5tpv5\" (UID: \"d3676b67-b73e-428d-bcfc-ba5be6f44bb1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5tpv5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876869 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzlpj\" (UniqueName: \"kubernetes.io/projected/e7c8235f-88c7-4d87-b1b5-9514cb07f9cf-kube-api-access-rzlpj\") pod \"catalog-operator-68c6474976-62dzt\" (UID: \"e7c8235f-88c7-4d87-b1b5-9514cb07f9cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876915 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a31d7247-0e6e-4cff-9d4e-ff7391171f8f-proxy-tls\") pod \"machine-config-controller-84d6567774-6lwxx\" (UID: \"a31d7247-0e6e-4cff-9d4e-ff7391171f8f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876961 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a31d7247-0e6e-4cff-9d4e-ff7391171f8f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6lwxx\" (UID: \"a31d7247-0e6e-4cff-9d4e-ff7391171f8f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.876976 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15f99982-b491-4a49-8fb9-f6355b956e11-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5wtbm\" (UID: 
\"15f99982-b491-4a49-8fb9-f6355b956e11\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.877004 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcwl4\" (UniqueName: \"kubernetes.io/projected/d14ecdcf-bbd4-4088-b767-19deee670b4c-kube-api-access-lcwl4\") pod \"openshift-controller-manager-operator-756b6f6bc6-mwxr2\" (UID: \"d14ecdcf-bbd4-4088-b767-19deee670b4c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.877021 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e8bf27-ab59-474a-a502-f214701d5208-config\") pod \"kube-controller-manager-operator-78b949d7b-jk2b5\" (UID: \"e2e8bf27-ab59-474a-a502-f214701d5208\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.877042 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6fs2\" (UniqueName: \"kubernetes.io/projected/08700e8f-7c54-4660-a65f-1871db5bbe5e-kube-api-access-f6fs2\") pod \"service-ca-operator-777779d784-9v7ng\" (UID: \"08700e8f-7c54-4660-a65f-1871db5bbe5e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.877186 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f00ebe1b-d72e-48e7-8a75-17f830a53378-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r8ps4\" (UID: \"f00ebe1b-d72e-48e7-8a75-17f830a53378\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.884300 4741 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.895434 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-75m4m\" (UID: \"1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.904267 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.924015 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.943075 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.949701 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e8bf27-ab59-474a-a502-f214701d5208-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jk2b5\" (UID: \"e2e8bf27-ab59-474a-a502-f214701d5208\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.964210 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.967731 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e8bf27-ab59-474a-a502-f214701d5208-config\") pod \"kube-controller-manager-operator-78b949d7b-jk2b5\" (UID: \"e2e8bf27-ab59-474a-a502-f214701d5208\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" Feb 26 08:16:19 crc kubenswrapper[4741]: I0226 08:16:19.984043 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.003938 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.023983 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.043701 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.053487 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d14ecdcf-bbd4-4088-b767-19deee670b4c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mwxr2\" (UID: \"d14ecdcf-bbd4-4088-b767-19deee670b4c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.064149 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.067150 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/d14ecdcf-bbd4-4088-b767-19deee670b4c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mwxr2\" (UID: \"d14ecdcf-bbd4-4088-b767-19deee670b4c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.104001 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.124442 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.144674 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.164559 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.183745 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.204816 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.211098 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81eb5f5b-a203-4076-97e2-8854cf7a22bd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2js2k\" (UID: \"81eb5f5b-a203-4076-97e2-8854cf7a22bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.224051 4741 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.227712 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81eb5f5b-a203-4076-97e2-8854cf7a22bd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2js2k\" (UID: \"81eb5f5b-a203-4076-97e2-8854cf7a22bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.244390 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.252060 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9eef37bd-7913-4e1c-baf0-775f14f6e18a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hgdrd\" (UID: \"9eef37bd-7913-4e1c-baf0-775f14f6e18a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.264804 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.283912 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.303929 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.324047 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 26 08:16:20 crc 
kubenswrapper[4741]: I0226 08:16:20.331740 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-default-certificate\") pod \"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.344701 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.350488 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-stats-auth\") pod \"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.364996 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.372166 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-metrics-certs\") pod \"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.383997 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.387256 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-service-ca-bundle\") pod \"router-default-5444994796-nrk4h\" (UID: 
\"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.404503 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.424447 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.443782 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.453740 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e7c8235f-88c7-4d87-b1b5-9514cb07f9cf-srv-cert\") pod \"catalog-operator-68c6474976-62dzt\" (UID: \"e7c8235f-88c7-4d87-b1b5-9514cb07f9cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.464447 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.472055 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ccea6218-4c8e-45dd-890f-5f9fd1806c99-profile-collector-cert\") pod \"olm-operator-6b444d44fb-66p6t\" (UID: \"ccea6218-4c8e-45dd-890f-5f9fd1806c99\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.472180 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4fc717c-df6a-4ba5-a998-6385257e6f7e-secret-volume\") pod \"collect-profiles-29534895-b2gkh\" (UID: 
\"b4fc717c-df6a-4ba5-a998-6385257e6f7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.472751 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e7c8235f-88c7-4d87-b1b5-9514cb07f9cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-62dzt\" (UID: \"e7c8235f-88c7-4d87-b1b5-9514cb07f9cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.484819 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.504525 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.524783 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.532186 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a31d7247-0e6e-4cff-9d4e-ff7391171f8f-proxy-tls\") pod \"machine-config-controller-84d6567774-6lwxx\" (UID: \"a31d7247-0e6e-4cff-9d4e-ff7391171f8f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.544294 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.564354 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 
08:16:20.568262 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/66b44309-5cc3-4f00-bb25-0b0ef360e06e-signing-cabundle\") pod \"service-ca-9c57cc56f-qcpm7\" (UID: \"66b44309-5cc3-4f00-bb25-0b0ef360e06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.583902 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.603880 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.612405 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/66b44309-5cc3-4f00-bb25-0b0ef360e06e-signing-key\") pod \"service-ca-9c57cc56f-qcpm7\" (UID: \"66b44309-5cc3-4f00-bb25-0b0ef360e06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.624580 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.642555 4741 request.go:700] Waited for 1.005319603s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.644462 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.663693 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 26 08:16:20 crc 
kubenswrapper[4741]: I0226 08:16:20.702205 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.703716 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.709569 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15f99982-b491-4a49-8fb9-f6355b956e11-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5wtbm\" (UID: \"15f99982-b491-4a49-8fb9-f6355b956e11\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.724980 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.732584 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15f99982-b491-4a49-8fb9-f6355b956e11-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5wtbm\" (UID: \"15f99982-b491-4a49-8fb9-f6355b956e11\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.751392 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.762596 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f00ebe1b-d72e-48e7-8a75-17f830a53378-proxy-tls\") pod \"machine-config-operator-74547568cd-r8ps4\" (UID: \"f00ebe1b-d72e-48e7-8a75-17f830a53378\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 
08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.766447 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.785913 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.787869 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f00ebe1b-d72e-48e7-8a75-17f830a53378-images\") pod \"machine-config-operator-74547568cd-r8ps4\" (UID: \"f00ebe1b-d72e-48e7-8a75-17f830a53378\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.805507 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.819367 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd3e06d-6012-4f0e-9425-91836b431b5b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-79j6p\" (UID: \"4fd3e06d-6012-4f0e-9425-91836b431b5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.824146 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.827661 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd3e06d-6012-4f0e-9425-91836b431b5b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-79j6p\" (UID: \"4fd3e06d-6012-4f0e-9425-91836b431b5b\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.845479 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.865492 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: E0226 08:16:20.876305 4741 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Feb 26 08:16:20 crc kubenswrapper[4741]: E0226 08:16:20.876497 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4fc717c-df6a-4ba5-a998-6385257e6f7e-config-volume podName:b4fc717c-df6a-4ba5-a998-6385257e6f7e nodeName:}" failed. No retries permitted until 2026-02-26 08:16:21.376463226 +0000 UTC m=+216.372400633 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b4fc717c-df6a-4ba5-a998-6385257e6f7e-config-volume") pod "collect-profiles-29534895-b2gkh" (UID: "b4fc717c-df6a-4ba5-a998-6385257e6f7e") : failed to sync configmap cache: timed out waiting for the condition Feb 26 08:16:20 crc kubenswrapper[4741]: E0226 08:16:20.876313 4741 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 26 08:16:20 crc kubenswrapper[4741]: E0226 08:16:20.876633 4741 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 26 08:16:20 crc kubenswrapper[4741]: E0226 08:16:20.876660 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00bfb7f6-7024-4431-9fc6-f86f8ff5e363-package-server-manager-serving-cert podName:00bfb7f6-7024-4431-9fc6-f86f8ff5e363 nodeName:}" failed. No retries permitted until 2026-02-26 08:16:21.37662593 +0000 UTC m=+216.372563348 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/00bfb7f6-7024-4431-9fc6-f86f8ff5e363-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-llgbc" (UID: "00bfb7f6-7024-4431-9fc6-f86f8ff5e363") : failed to sync secret cache: timed out waiting for the condition Feb 26 08:16:20 crc kubenswrapper[4741]: E0226 08:16:20.876705 4741 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 26 08:16:20 crc kubenswrapper[4741]: E0226 08:16:20.876743 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08700e8f-7c54-4660-a65f-1871db5bbe5e-serving-cert podName:08700e8f-7c54-4660-a65f-1871db5bbe5e nodeName:}" failed. No retries permitted until 2026-02-26 08:16:21.376712223 +0000 UTC m=+216.372649650 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/08700e8f-7c54-4660-a65f-1871db5bbe5e-serving-cert") pod "service-ca-operator-777779d784-9v7ng" (UID: "08700e8f-7c54-4660-a65f-1871db5bbe5e") : failed to sync secret cache: timed out waiting for the condition Feb 26 08:16:20 crc kubenswrapper[4741]: E0226 08:16:20.876776 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccea6218-4c8e-45dd-890f-5f9fd1806c99-srv-cert podName:ccea6218-4c8e-45dd-890f-5f9fd1806c99 nodeName:}" failed. No retries permitted until 2026-02-26 08:16:21.376762764 +0000 UTC m=+216.372700181 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ccea6218-4c8e-45dd-890f-5f9fd1806c99-srv-cert") pod "olm-operator-6b444d44fb-66p6t" (UID: "ccea6218-4c8e-45dd-890f-5f9fd1806c99") : failed to sync secret cache: timed out waiting for the condition Feb 26 08:16:20 crc kubenswrapper[4741]: E0226 08:16:20.876957 4741 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 26 08:16:20 crc kubenswrapper[4741]: E0226 08:16:20.877019 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3676b67-b73e-428d-bcfc-ba5be6f44bb1-webhook-certs podName:d3676b67-b73e-428d-bcfc-ba5be6f44bb1 nodeName:}" failed. No retries permitted until 2026-02-26 08:16:21.377004251 +0000 UTC m=+216.372941878 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d3676b67-b73e-428d-bcfc-ba5be6f44bb1-webhook-certs") pod "multus-admission-controller-857f4d67dd-5tpv5" (UID: "d3676b67-b73e-428d-bcfc-ba5be6f44bb1") : failed to sync secret cache: timed out waiting for the condition Feb 26 08:16:20 crc kubenswrapper[4741]: E0226 08:16:20.877076 4741 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 26 08:16:20 crc kubenswrapper[4741]: E0226 08:16:20.877184 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/08700e8f-7c54-4660-a65f-1871db5bbe5e-config podName:08700e8f-7c54-4660-a65f-1871db5bbe5e nodeName:}" failed. No retries permitted until 2026-02-26 08:16:21.377168306 +0000 UTC m=+216.373105723 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/08700e8f-7c54-4660-a65f-1871db5bbe5e-config") pod "service-ca-operator-777779d784-9v7ng" (UID: "08700e8f-7c54-4660-a65f-1871db5bbe5e") : failed to sync configmap cache: timed out waiting for the condition Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.884646 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.904596 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.924818 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.944579 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.964358 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 26 08:16:20 crc kubenswrapper[4741]: I0226 08:16:20.983781 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.004297 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.024648 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.065377 4741 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.077014 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz9xq\" (UniqueName: \"kubernetes.io/projected/439540a4-1c57-4c48-81ba-842dc3d88804-kube-api-access-gz9xq\") pod \"cluster-samples-operator-665b6dd947-qtjq7\" (UID: \"439540a4-1c57-4c48-81ba-842dc3d88804\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.110863 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfl6l\" (UniqueName: \"kubernetes.io/projected/d33f8d5a-b472-4fb9-9905-e22f985be009-kube-api-access-qfl6l\") pod \"machine-approver-56656f9798-bfq7l\" (UID: \"d33f8d5a-b472-4fb9-9905-e22f985be009\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.123517 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.148872 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d828q\" (UniqueName: \"kubernetes.io/projected/59420a86-a033-4cbe-98bf-3ec780191ed6-kube-api-access-d828q\") pod \"downloads-7954f5f757-ptx5j\" (UID: \"59420a86-a033-4cbe-98bf-3ec780191ed6\") " pod="openshift-console/downloads-7954f5f757-ptx5j" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.153762 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-ptx5j" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.156687 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvn4m\" (UniqueName: \"kubernetes.io/projected/7b281845-065b-47b4-9bd9-2d45ce79b693-kube-api-access-nvn4m\") pod \"controller-manager-879f6c89f-r5l64\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.192259 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4r9s\" (UniqueName: \"kubernetes.io/projected/98312bcf-f7e9-4868-904a-c27e825ce830-kube-api-access-m4r9s\") pod \"oauth-openshift-558db77b4-42f2w\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.195528 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghwnp\" (UniqueName: \"kubernetes.io/projected/5fdadf1f-38a7-41a9-ab52-e750457f3e00-kube-api-access-ghwnp\") pod \"apiserver-76f77b778f-nhrbh\" (UID: \"5fdadf1f-38a7-41a9-ab52-e750457f3e00\") " pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.204046 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.220594 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdfnl\" (UniqueName: \"kubernetes.io/projected/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-kube-api-access-gdfnl\") pod \"route-controller-manager-6576b87f9c-7qhb6\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.232496 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" event={"ID":"d33f8d5a-b472-4fb9-9905-e22f985be009","Type":"ContainerStarted","Data":"fc17e1725ccd0ebfbae205b954084d48feccfbb75d8c2172a613c19d488bab08"} Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.237414 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fj7k\" (UniqueName: \"kubernetes.io/projected/8c718e96-1e2d-41e8-beff-d68534e49add-kube-api-access-8fj7k\") pod \"apiserver-7bbb656c7d-tvh2j\" (UID: \"8c718e96-1e2d-41e8-beff-d68534e49add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.243924 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.246627 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dl9p\" (UniqueName: \"kubernetes.io/projected/79be0654-3564-4cd1-87f7-e9eb1c972bbd-kube-api-access-4dl9p\") pod \"machine-api-operator-5694c8668f-llf79\" (UID: \"79be0654-3564-4cd1-87f7-e9eb1c972bbd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.264237 4741 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.275836 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.284026 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.305528 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.322690 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.327052 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.358734 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.365027 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.376327 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.386357 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.405268 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.408143 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08700e8f-7c54-4660-a65f-1871db5bbe5e-config\") pod \"service-ca-operator-777779d784-9v7ng\" (UID: \"08700e8f-7c54-4660-a65f-1871db5bbe5e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.408186 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d3676b67-b73e-428d-bcfc-ba5be6f44bb1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5tpv5\" (UID: \"d3676b67-b73e-428d-bcfc-ba5be6f44bb1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5tpv5" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.408456 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/00bfb7f6-7024-4431-9fc6-f86f8ff5e363-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-llgbc\" (UID: \"00bfb7f6-7024-4431-9fc6-f86f8ff5e363\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.408544 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b4fc717c-df6a-4ba5-a998-6385257e6f7e-config-volume\") pod \"collect-profiles-29534895-b2gkh\" (UID: \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.408641 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ccea6218-4c8e-45dd-890f-5f9fd1806c99-srv-cert\") pod \"olm-operator-6b444d44fb-66p6t\" (UID: \"ccea6218-4c8e-45dd-890f-5f9fd1806c99\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.408664 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08700e8f-7c54-4660-a65f-1871db5bbe5e-serving-cert\") pod \"service-ca-operator-777779d784-9v7ng\" (UID: \"08700e8f-7c54-4660-a65f-1871db5bbe5e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.408794 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08700e8f-7c54-4660-a65f-1871db5bbe5e-config\") pod \"service-ca-operator-777779d784-9v7ng\" (UID: \"08700e8f-7c54-4660-a65f-1871db5bbe5e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.410589 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4fc717c-df6a-4ba5-a998-6385257e6f7e-config-volume\") pod \"collect-profiles-29534895-b2gkh\" (UID: \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.415384 4741 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d3676b67-b73e-428d-bcfc-ba5be6f44bb1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5tpv5\" (UID: \"d3676b67-b73e-428d-bcfc-ba5be6f44bb1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5tpv5" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.415822 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/00bfb7f6-7024-4431-9fc6-f86f8ff5e363-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-llgbc\" (UID: \"00bfb7f6-7024-4431-9fc6-f86f8ff5e363\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.421844 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08700e8f-7c54-4660-a65f-1871db5bbe5e-serving-cert\") pod \"service-ca-operator-777779d784-9v7ng\" (UID: \"08700e8f-7c54-4660-a65f-1871db5bbe5e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.422375 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ccea6218-4c8e-45dd-890f-5f9fd1806c99-srv-cert\") pod \"olm-operator-6b444d44fb-66p6t\" (UID: \"ccea6218-4c8e-45dd-890f-5f9fd1806c99\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.428522 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.429835 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.443001 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.445044 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.468936 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.476359 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ptx5j"] Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.484174 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.496552 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7"] Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.507648 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.524437 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.547731 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-llf79"] Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.552594 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.572645 4741 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.585215 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.603830 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.624028 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.638103 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6"] Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.643734 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.661828 4741 request.go:700] Waited for 1.926846777s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.662026 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nhrbh"] Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.664057 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.684242 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.704912 
4741 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.725093 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 26 08:16:21 crc kubenswrapper[4741]: W0226 08:16:21.738278 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fdadf1f_38a7_41a9_ab52_e750457f3e00.slice/crio-b873fdc128ec0ee0d07482b8a11decb21809d98aea99785e249007f8cfb59a2a WatchSource:0}: Error finding container b873fdc128ec0ee0d07482b8a11decb21809d98aea99785e249007f8cfb59a2a: Status 404 returned error can't find the container with id b873fdc128ec0ee0d07482b8a11decb21809d98aea99785e249007f8cfb59a2a Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.740789 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j"] Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.743510 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.783999 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf8qh\" (UniqueName: \"kubernetes.io/projected/4a4923cd-a652-4027-9945-5b20f94b0fff-kube-api-access-pf8qh\") pod \"authentication-operator-69f744f599-hn57j\" (UID: \"4a4923cd-a652-4027-9945-5b20f94b0fff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.809419 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-42f2w"] Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.815271 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfc89\" 
(UniqueName: \"kubernetes.io/projected/c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6-kube-api-access-qfc89\") pod \"etcd-operator-b45778765-bh9dr\" (UID: \"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.820828 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5htmr\" (UniqueName: \"kubernetes.io/projected/c7a6edd5-0d0d-431a-9884-af988d7db265-kube-api-access-5htmr\") pod \"console-operator-58897d9998-8jb7x\" (UID: \"c7a6edd5-0d0d-431a-9884-af988d7db265\") " pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.826181 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.833698 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.836072 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xgn6m\" (UID: \"9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.841963 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.850516 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.856931 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlxgn\" (UniqueName: \"kubernetes.io/projected/a1087876-b61e-42ed-bd63-0ede0e6a09e3-kube-api-access-zlxgn\") pod \"console-f9d7485db-hdqgn\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.880755 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb8vw\" (UniqueName: \"kubernetes.io/projected/1a8d4594-ac99-42e7-bfee-f526419ee990-kube-api-access-kb8vw\") pod \"dns-operator-744455d44c-thvbc\" (UID: \"1a8d4594-ac99-42e7-bfee-f526419ee990\") " pod="openshift-dns-operator/dns-operator-744455d44c-thvbc" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.897568 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkps9\" (UniqueName: \"kubernetes.io/projected/1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2-kube-api-access-gkps9\") pod \"kube-storage-version-migrator-operator-b67b599dd-75m4m\" (UID: \"1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.907607 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.913996 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.917627 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87ce91fe-866c-44e7-8c94-f3d7a994cc75-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xglrq\" (UID: \"87ce91fe-866c-44e7-8c94-f3d7a994cc75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.933562 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5l64"] Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.939905 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp84w\" (UniqueName: \"kubernetes.io/projected/87ce91fe-866c-44e7-8c94-f3d7a994cc75-kube-api-access-fp84w\") pod \"cluster-image-registry-operator-dc59b4c8b-xglrq\" (UID: \"87ce91fe-866c-44e7-8c94-f3d7a994cc75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.959528 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6wr6\" (UniqueName: \"kubernetes.io/projected/14003e7a-87ba-4ef7-8817-96288f162752-kube-api-access-z6wr6\") pod \"migrator-59844c95c7-xmgrw\" (UID: \"14003e7a-87ba-4ef7-8817-96288f162752\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xmgrw" Feb 26 08:16:21 crc kubenswrapper[4741]: I0226 08:16:21.983782 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjfn8\" (UniqueName: \"kubernetes.io/projected/1ceb1ab9-9ce4-4a40-9273-727f0499aa21-kube-api-access-xjfn8\") pod \"openshift-config-operator-7777fb866f-9zw77\" (UID: \"1ceb1ab9-9ce4-4a40-9273-727f0499aa21\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.017102 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tjtw\" (UniqueName: \"kubernetes.io/projected/ccea6218-4c8e-45dd-890f-5f9fd1806c99-kube-api-access-6tjtw\") pod \"olm-operator-6b444d44fb-66p6t\" (UID: \"ccea6218-4c8e-45dd-890f-5f9fd1806c99\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.018496 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c49h4\" (UniqueName: \"kubernetes.io/projected/f00ebe1b-d72e-48e7-8a75-17f830a53378-kube-api-access-c49h4\") pod \"machine-config-operator-74547568cd-r8ps4\" (UID: \"f00ebe1b-d72e-48e7-8a75-17f830a53378\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.049218 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88rf9\" (UniqueName: \"kubernetes.io/projected/4fd3e06d-6012-4f0e-9425-91836b431b5b-kube-api-access-88rf9\") pod \"openshift-apiserver-operator-796bbdcf4f-79j6p\" (UID: \"4fd3e06d-6012-4f0e-9425-91836b431b5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.062643 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2tgc\" (UniqueName: \"kubernetes.io/projected/b4fc717c-df6a-4ba5-a998-6385257e6f7e-kube-api-access-r2tgc\") pod \"collect-profiles-29534895-b2gkh\" (UID: \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.075781 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.084045 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j59h4\" (UniqueName: \"kubernetes.io/projected/15f99982-b491-4a49-8fb9-f6355b956e11-kube-api-access-j59h4\") pod \"marketplace-operator-79b997595-5wtbm\" (UID: \"15f99982-b491-4a49-8fb9-f6355b956e11\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.102141 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81eb5f5b-a203-4076-97e2-8854cf7a22bd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2js2k\" (UID: \"81eb5f5b-a203-4076-97e2-8854cf7a22bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.120074 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7xjl\" (UniqueName: \"kubernetes.io/projected/9eef37bd-7913-4e1c-baf0-775f14f6e18a-kube-api-access-b7xjl\") pod \"control-plane-machine-set-operator-78cbb6b69f-hgdrd\" (UID: \"9eef37bd-7913-4e1c-baf0-775f14f6e18a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.120380 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-thvbc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.140611 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2e8bf27-ab59-474a-a502-f214701d5208-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jk2b5\" (UID: \"e2e8bf27-ab59-474a-a502-f214701d5208\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.158788 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.161794 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxrq9\" (UniqueName: \"kubernetes.io/projected/bf09eafa-6397-4fa3-b7f4-c56e66348f9a-kube-api-access-qxrq9\") pod \"router-default-5444994796-nrk4h\" (UID: \"bf09eafa-6397-4fa3-b7f4-c56e66348f9a\") " pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.191222 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z72tt\" (UniqueName: \"kubernetes.io/projected/00bfb7f6-7024-4431-9fc6-f86f8ff5e363-kube-api-access-z72tt\") pod \"package-server-manager-789f6589d5-llgbc\" (UID: \"00bfb7f6-7024-4431-9fc6-f86f8ff5e363\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.193303 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xmgrw" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.199737 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.203933 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktlhl\" (UniqueName: \"kubernetes.io/projected/66b44309-5cc3-4f00-bb25-0b0ef360e06e-kube-api-access-ktlhl\") pod \"service-ca-9c57cc56f-qcpm7\" (UID: \"66b44309-5cc3-4f00-bb25-0b0ef360e06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.219705 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8jb7x"] Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.221011 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dcr2\" (UniqueName: \"kubernetes.io/projected/d3676b67-b73e-428d-bcfc-ba5be6f44bb1-kube-api-access-9dcr2\") pod \"multus-admission-controller-857f4d67dd-5tpv5\" (UID: \"d3676b67-b73e-428d-bcfc-ba5be6f44bb1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5tpv5" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.223471 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.247312 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.251674 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" event={"ID":"7b281845-065b-47b4-9bd9-2d45ce79b693","Type":"ContainerStarted","Data":"ebe7540007bf601cc51a768134495e1c1417c4f5c8712f2fa0f5cb60b9ff25d8"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.251843 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.252303 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzlpj\" (UniqueName: \"kubernetes.io/projected/e7c8235f-88c7-4d87-b1b5-9514cb07f9cf-kube-api-access-rzlpj\") pod \"catalog-operator-68c6474976-62dzt\" (UID: \"e7c8235f-88c7-4d87-b1b5-9514cb07f9cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.253554 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" event={"ID":"98312bcf-f7e9-4868-904a-c27e825ce830","Type":"ContainerStarted","Data":"f500bb91e4348a0c6191d774b72f7a8493f55e0f8d256475c7fb4968e2363c9f"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.259948 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" event={"ID":"d33f8d5a-b472-4fb9-9905-e22f985be009","Type":"ContainerStarted","Data":"51373d8b00f4390a3291e52fc5d21e7b8118f55abcd25c9d26841bbd6702d8ee"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.260002 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" 
event={"ID":"d33f8d5a-b472-4fb9-9905-e22f985be009","Type":"ContainerStarted","Data":"3d34546db5de88da5c9b7ac9105d5f6ba67bcc32d0a08075fe606963ab550d68"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.260819 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.271252 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.274860 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" event={"ID":"8c718e96-1e2d-41e8-beff-d68534e49add","Type":"ContainerStarted","Data":"7b29150aedcefb604faa695a53873671482de5f72b9a7a57880cd23e3cb53ad6"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.276285 4741 generic.go:334] "Generic (PLEG): container finished" podID="5fdadf1f-38a7-41a9-ab52-e750457f3e00" containerID="a6abf877301caf65bfca523d942a6fbe133e7261fdca2fbe81c5bb312c8d081d" exitCode=0 Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.276348 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" event={"ID":"5fdadf1f-38a7-41a9-ab52-e750457f3e00","Type":"ContainerDied","Data":"a6abf877301caf65bfca523d942a6fbe133e7261fdca2fbe81c5bb312c8d081d"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.276364 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" event={"ID":"5fdadf1f-38a7-41a9-ab52-e750457f3e00","Type":"ContainerStarted","Data":"b873fdc128ec0ee0d07482b8a11decb21809d98aea99785e249007f8cfb59a2a"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.277937 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" 
event={"ID":"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb","Type":"ContainerStarted","Data":"f66b6f11c5d11eafb417922b2855cb746e6d7e0d30e359e9c330f412e5adbf69"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.277959 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" event={"ID":"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb","Type":"ContainerStarted","Data":"b69a4adb3e28639a7d1c455b0cc2003d5762cde64089f8d50f6d32beb6f52a3c"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.278382 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.279735 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcwl4\" (UniqueName: \"kubernetes.io/projected/d14ecdcf-bbd4-4088-b767-19deee670b4c-kube-api-access-lcwl4\") pod \"openshift-controller-manager-operator-756b6f6bc6-mwxr2\" (UID: \"d14ecdcf-bbd4-4088-b767-19deee670b4c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.280326 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6fs2\" (UniqueName: \"kubernetes.io/projected/08700e8f-7c54-4660-a65f-1871db5bbe5e-kube-api-access-f6fs2\") pod \"service-ca-operator-777779d784-9v7ng\" (UID: \"08700e8f-7c54-4660-a65f-1871db5bbe5e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.280448 4741 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7qhb6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" 
start-of-body= Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.280480 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" podUID="4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.284584 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.292606 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.292616 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7" event={"ID":"439540a4-1c57-4c48-81ba-842dc3d88804","Type":"ContainerStarted","Data":"745ce3455972689b0d4e51980f69d5084d5078f410eb098b906545a58143934c"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.292684 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7" event={"ID":"439540a4-1c57-4c48-81ba-842dc3d88804","Type":"ContainerStarted","Data":"6625eeb95628afebaba0da741fc44e75df0233367da80ee56d326182995c1e69"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.299570 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" event={"ID":"79be0654-3564-4cd1-87f7-e9eb1c972bbd","Type":"ContainerStarted","Data":"424a18f290d543df152d341d13899dd6d1abf88670cc94e9f836376dbfc630a4"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.299612 4741 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" event={"ID":"79be0654-3564-4cd1-87f7-e9eb1c972bbd","Type":"ContainerStarted","Data":"75ca8e7beffbafe73e18562817054a0e769d340c8bd008d12fbc04c0c9b17e1c"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.300281 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.301271 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ptx5j" event={"ID":"59420a86-a033-4cbe-98bf-3ec780191ed6","Type":"ContainerStarted","Data":"e1ee6102831c5fbd2a07f5e57b6185dcebed49a95b1b07ada711a3a7548c1108"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.301323 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ptx5j" event={"ID":"59420a86-a033-4cbe-98bf-3ec780191ed6","Type":"ContainerStarted","Data":"9f9421a01c1d8772f616868edc2437d3491ce088b8a76624829fb59bc75e69ef"} Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.301883 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ptx5j" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.302481 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx6xk\" (UniqueName: \"kubernetes.io/projected/a31d7247-0e6e-4cff-9d4e-ff7391171f8f-kube-api-access-jx6xk\") pod \"machine-config-controller-84d6567774-6lwxx\" (UID: \"a31d7247-0e6e-4cff-9d4e-ff7391171f8f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.309935 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.319694 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.330530 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m"] Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.335680 4741 patch_prober.go:28] interesting pod/downloads-7954f5f757-ptx5j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.335834 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ptx5j" podUID="59420a86-a033-4cbe-98bf-3ec780191ed6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.347655 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.348437 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.358490 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m"] Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.358934 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hn57j"] Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.359997 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5tpv5" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.363166 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh"] Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.419220 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hdqgn"] Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.437773 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bh9dr"] Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.449261 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ee4578f5-1608-403f-9132-5613a1b3a105-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.449312 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ee4578f5-1608-403f-9132-5613a1b3a105-registry-certificates\") pod 
\"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.449391 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-registry-tls\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.449431 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca42622a-7a05-4d6d-a432-389fa771e319-apiservice-cert\") pod \"packageserver-d55dfcdfc-btgjx\" (UID: \"ca42622a-7a05-4d6d-a432-389fa771e319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.449486 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-bound-sa-token\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.449529 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ca42622a-7a05-4d6d-a432-389fa771e319-tmpfs\") pod \"packageserver-d55dfcdfc-btgjx\" (UID: \"ca42622a-7a05-4d6d-a432-389fa771e319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.449549 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bh54\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-kube-api-access-6bh54\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.449579 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee4578f5-1608-403f-9132-5613a1b3a105-trusted-ca\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.449628 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ee4578f5-1608-403f-9132-5613a1b3a105-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.449656 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca42622a-7a05-4d6d-a432-389fa771e319-webhook-cert\") pod \"packageserver-d55dfcdfc-btgjx\" (UID: \"ca42622a-7a05-4d6d-a432-389fa771e319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.449706 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: 
\"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.449727 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff4k6\" (UniqueName: \"kubernetes.io/projected/ca42622a-7a05-4d6d-a432-389fa771e319-kube-api-access-ff4k6\") pod \"packageserver-d55dfcdfc-btgjx\" (UID: \"ca42622a-7a05-4d6d-a432-389fa771e319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:22 crc kubenswrapper[4741]: E0226 08:16:22.452018 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:22.95200244 +0000 UTC m=+217.947939827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:22 crc kubenswrapper[4741]: W0226 08:16:22.473509 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ae79f3d_a7b9_46bc_b46d_bb96c890c0c2.slice/crio-b81a67dd280d7bd1ae1c21e5f8323231fa179770747236d0c9c60e9b1646c750 WatchSource:0}: Error finding container b81a67dd280d7bd1ae1c21e5f8323231fa179770747236d0c9c60e9b1646c750: Status 404 returned error can't find the container with id b81a67dd280d7bd1ae1c21e5f8323231fa179770747236d0c9c60e9b1646c750 Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.500168 4741 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9zw77"] Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.528824 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.550679 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-thvbc"] Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.552021 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.552398 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1535713f-0ce9-4308-87b9-d19b3197c67a-certs\") pod \"machine-config-server-8m2b9\" (UID: \"1535713f-0ce9-4308-87b9-d19b3197c67a\") " pod="openshift-machine-config-operator/machine-config-server-8m2b9" Feb 26 08:16:22 crc kubenswrapper[4741]: E0226 08:16:22.552478 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:23.052425228 +0000 UTC m=+218.048362735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.552600 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zq4\" (UniqueName: \"kubernetes.io/projected/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-kube-api-access-r5zq4\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.552769 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-registration-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.553411 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-csi-data-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.553515 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.553671 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff4k6\" (UniqueName: \"kubernetes.io/projected/ca42622a-7a05-4d6d-a432-389fa771e319-kube-api-access-ff4k6\") pod \"packageserver-d55dfcdfc-btgjx\" (UID: \"ca42622a-7a05-4d6d-a432-389fa771e319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.553711 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lwf9\" (UniqueName: \"kubernetes.io/projected/d8f7cb7a-591c-46e0-8b16-6946488ee293-kube-api-access-5lwf9\") pod \"dns-default-85c82\" (UID: \"d8f7cb7a-591c-46e0-8b16-6946488ee293\") " pod="openshift-dns/dns-default-85c82" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.553925 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1535713f-0ce9-4308-87b9-d19b3197c67a-node-bootstrap-token\") pod \"machine-config-server-8m2b9\" (UID: \"1535713f-0ce9-4308-87b9-d19b3197c67a\") " pod="openshift-machine-config-operator/machine-config-server-8m2b9" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.554152 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jmlc\" (UniqueName: \"kubernetes.io/projected/fa4d59a5-1098-4508-9095-1e2aa97c478b-kube-api-access-7jmlc\") pod \"ingress-canary-2264l\" (UID: \"fa4d59a5-1098-4508-9095-1e2aa97c478b\") " pod="openshift-ingress-canary/ingress-canary-2264l" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.554189 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-mountpoint-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.555036 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ee4578f5-1608-403f-9132-5613a1b3a105-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.555120 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ee4578f5-1608-403f-9132-5613a1b3a105-registry-certificates\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.555150 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa4d59a5-1098-4508-9095-1e2aa97c478b-cert\") pod \"ingress-canary-2264l\" (UID: \"fa4d59a5-1098-4508-9095-1e2aa97c478b\") " pod="openshift-ingress-canary/ingress-canary-2264l" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.555188 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4rxq\" (UniqueName: \"kubernetes.io/projected/565843a6-5907-4445-9686-cb92b1a56bec-kube-api-access-d4rxq\") pod \"auto-csr-approver-29534896-rcrbz\" (UID: \"565843a6-5907-4445-9686-cb92b1a56bec\") " pod="openshift-infra/auto-csr-approver-29534896-rcrbz" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.555325 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-registry-tls\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.555355 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad96cd59-8bf7-45d4-91fd-473f23f12565-trusted-ca\") pod \"ingress-operator-5b745b69d9-th5gf\" (UID: \"ad96cd59-8bf7-45d4-91fd-473f23f12565\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.558724 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ee4578f5-1608-403f-9132-5613a1b3a105-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.560679 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq"] Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.561309 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad96cd59-8bf7-45d4-91fd-473f23f12565-bound-sa-token\") pod \"ingress-operator-5b745b69d9-th5gf\" (UID: \"ad96cd59-8bf7-45d4-91fd-473f23f12565\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.562691 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/ca42622a-7a05-4d6d-a432-389fa771e319-apiservice-cert\") pod \"packageserver-d55dfcdfc-btgjx\" (UID: \"ca42622a-7a05-4d6d-a432-389fa771e319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.562752 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pl5j\" (UniqueName: \"kubernetes.io/projected/ad96cd59-8bf7-45d4-91fd-473f23f12565-kube-api-access-4pl5j\") pod \"ingress-operator-5b745b69d9-th5gf\" (UID: \"ad96cd59-8bf7-45d4-91fd-473f23f12565\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.562890 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-bound-sa-token\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.563568 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8f7cb7a-591c-46e0-8b16-6946488ee293-config-volume\") pod \"dns-default-85c82\" (UID: \"d8f7cb7a-591c-46e0-8b16-6946488ee293\") " pod="openshift-dns/dns-default-85c82" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.563848 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ca42622a-7a05-4d6d-a432-389fa771e319-tmpfs\") pod \"packageserver-d55dfcdfc-btgjx\" (UID: \"ca42622a-7a05-4d6d-a432-389fa771e319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.563892 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8f7cb7a-591c-46e0-8b16-6946488ee293-metrics-tls\") pod \"dns-default-85c82\" (UID: \"d8f7cb7a-591c-46e0-8b16-6946488ee293\") " pod="openshift-dns/dns-default-85c82" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.564192 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-plugins-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.566981 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bh54\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-kube-api-access-6bh54\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.567026 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrf5b\" (UniqueName: \"kubernetes.io/projected/1535713f-0ce9-4308-87b9-d19b3197c67a-kube-api-access-rrf5b\") pod \"machine-config-server-8m2b9\" (UID: \"1535713f-0ce9-4308-87b9-d19b3197c67a\") " pod="openshift-machine-config-operator/machine-config-server-8m2b9" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.567187 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-registry-tls\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: 
I0226 08:16:22.567989 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ca42622a-7a05-4d6d-a432-389fa771e319-tmpfs\") pod \"packageserver-d55dfcdfc-btgjx\" (UID: \"ca42622a-7a05-4d6d-a432-389fa771e319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.573459 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-socket-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" Feb 26 08:16:22 crc kubenswrapper[4741]: E0226 08:16:22.574492 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:23.074463302 +0000 UTC m=+218.070400689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.574994 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad96cd59-8bf7-45d4-91fd-473f23f12565-metrics-tls\") pod \"ingress-operator-5b745b69d9-th5gf\" (UID: \"ad96cd59-8bf7-45d4-91fd-473f23f12565\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.575393 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee4578f5-1608-403f-9132-5613a1b3a105-trusted-ca\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.575493 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.577390 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ee4578f5-1608-403f-9132-5613a1b3a105-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.577765 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca42622a-7a05-4d6d-a432-389fa771e319-webhook-cert\") pod \"packageserver-d55dfcdfc-btgjx\" (UID: \"ca42622a-7a05-4d6d-a432-389fa771e319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.581388 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca42622a-7a05-4d6d-a432-389fa771e319-apiservice-cert\") pod \"packageserver-d55dfcdfc-btgjx\" (UID: \"ca42622a-7a05-4d6d-a432-389fa771e319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.583495 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca42622a-7a05-4d6d-a432-389fa771e319-webhook-cert\") pod \"packageserver-d55dfcdfc-btgjx\" (UID: \"ca42622a-7a05-4d6d-a432-389fa771e319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.596175 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5"] Feb 26 08:16:22 crc 
kubenswrapper[4741]: I0226 08:16:22.622706 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff4k6\" (UniqueName: \"kubernetes.io/projected/ca42622a-7a05-4d6d-a432-389fa771e319-kube-api-access-ff4k6\") pod \"packageserver-d55dfcdfc-btgjx\" (UID: \"ca42622a-7a05-4d6d-a432-389fa771e319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.634502 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-bound-sa-token\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.646266 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bh54\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-kube-api-access-6bh54\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.663948 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qcpm7"] Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680089 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680282 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lwf9\" (UniqueName: 
\"kubernetes.io/projected/d8f7cb7a-591c-46e0-8b16-6946488ee293-kube-api-access-5lwf9\") pod \"dns-default-85c82\" (UID: \"d8f7cb7a-591c-46e0-8b16-6946488ee293\") " pod="openshift-dns/dns-default-85c82" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680321 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1535713f-0ce9-4308-87b9-d19b3197c67a-node-bootstrap-token\") pod \"machine-config-server-8m2b9\" (UID: \"1535713f-0ce9-4308-87b9-d19b3197c67a\") " pod="openshift-machine-config-operator/machine-config-server-8m2b9" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680342 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jmlc\" (UniqueName: \"kubernetes.io/projected/fa4d59a5-1098-4508-9095-1e2aa97c478b-kube-api-access-7jmlc\") pod \"ingress-canary-2264l\" (UID: \"fa4d59a5-1098-4508-9095-1e2aa97c478b\") " pod="openshift-ingress-canary/ingress-canary-2264l" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680362 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-mountpoint-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680384 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa4d59a5-1098-4508-9095-1e2aa97c478b-cert\") pod \"ingress-canary-2264l\" (UID: \"fa4d59a5-1098-4508-9095-1e2aa97c478b\") " pod="openshift-ingress-canary/ingress-canary-2264l" Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680412 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4rxq\" (UniqueName: 
\"kubernetes.io/projected/565843a6-5907-4445-9686-cb92b1a56bec-kube-api-access-d4rxq\") pod \"auto-csr-approver-29534896-rcrbz\" (UID: \"565843a6-5907-4445-9686-cb92b1a56bec\") " pod="openshift-infra/auto-csr-approver-29534896-rcrbz"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680433 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad96cd59-8bf7-45d4-91fd-473f23f12565-trusted-ca\") pod \"ingress-operator-5b745b69d9-th5gf\" (UID: \"ad96cd59-8bf7-45d4-91fd-473f23f12565\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680454 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad96cd59-8bf7-45d4-91fd-473f23f12565-bound-sa-token\") pod \"ingress-operator-5b745b69d9-th5gf\" (UID: \"ad96cd59-8bf7-45d4-91fd-473f23f12565\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680474 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pl5j\" (UniqueName: \"kubernetes.io/projected/ad96cd59-8bf7-45d4-91fd-473f23f12565-kube-api-access-4pl5j\") pod \"ingress-operator-5b745b69d9-th5gf\" (UID: \"ad96cd59-8bf7-45d4-91fd-473f23f12565\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680489 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8f7cb7a-591c-46e0-8b16-6946488ee293-config-volume\") pod \"dns-default-85c82\" (UID: \"d8f7cb7a-591c-46e0-8b16-6946488ee293\") " pod="openshift-dns/dns-default-85c82"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680509 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8f7cb7a-591c-46e0-8b16-6946488ee293-metrics-tls\") pod \"dns-default-85c82\" (UID: \"d8f7cb7a-591c-46e0-8b16-6946488ee293\") " pod="openshift-dns/dns-default-85c82"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680523 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-plugins-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680541 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrf5b\" (UniqueName: \"kubernetes.io/projected/1535713f-0ce9-4308-87b9-d19b3197c67a-kube-api-access-rrf5b\") pod \"machine-config-server-8m2b9\" (UID: \"1535713f-0ce9-4308-87b9-d19b3197c67a\") " pod="openshift-machine-config-operator/machine-config-server-8m2b9"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680566 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-socket-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680588 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad96cd59-8bf7-45d4-91fd-473f23f12565-metrics-tls\") pod \"ingress-operator-5b745b69d9-th5gf\" (UID: \"ad96cd59-8bf7-45d4-91fd-473f23f12565\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680643 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1535713f-0ce9-4308-87b9-d19b3197c67a-certs\") pod \"machine-config-server-8m2b9\" (UID: \"1535713f-0ce9-4308-87b9-d19b3197c67a\") " pod="openshift-machine-config-operator/machine-config-server-8m2b9"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680661 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zq4\" (UniqueName: \"kubernetes.io/projected/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-kube-api-access-r5zq4\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680679 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-registration-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680717 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-csi-data-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.680854 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-csi-data-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv"
Feb 26 08:16:22 crc kubenswrapper[4741]: E0226 08:16:22.680927 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:23.180910503 +0000 UTC m=+218.176847890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.682015 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4"]
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.682957 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-mountpoint-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.683076 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8f7cb7a-591c-46e0-8b16-6946488ee293-config-volume\") pod \"dns-default-85c82\" (UID: \"d8f7cb7a-591c-46e0-8b16-6946488ee293\") " pod="openshift-dns/dns-default-85c82"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.684468 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-registration-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.684679 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad96cd59-8bf7-45d4-91fd-473f23f12565-trusted-ca\") pod \"ingress-operator-5b745b69d9-th5gf\" (UID: \"ad96cd59-8bf7-45d4-91fd-473f23f12565\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.685217 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1535713f-0ce9-4308-87b9-d19b3197c67a-node-bootstrap-token\") pod \"machine-config-server-8m2b9\" (UID: \"1535713f-0ce9-4308-87b9-d19b3197c67a\") " pod="openshift-machine-config-operator/machine-config-server-8m2b9"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.686440 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.686534 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1535713f-0ce9-4308-87b9-d19b3197c67a-certs\") pod \"machine-config-server-8m2b9\" (UID: \"1535713f-0ce9-4308-87b9-d19b3197c67a\") " pod="openshift-machine-config-operator/machine-config-server-8m2b9"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.687185 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-plugins-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.687294 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-socket-dir\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.688614 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa4d59a5-1098-4508-9095-1e2aa97c478b-cert\") pod \"ingress-canary-2264l\" (UID: \"fa4d59a5-1098-4508-9095-1e2aa97c478b\") " pod="openshift-ingress-canary/ingress-canary-2264l"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.692684 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8f7cb7a-591c-46e0-8b16-6946488ee293-metrics-tls\") pod \"dns-default-85c82\" (UID: \"d8f7cb7a-591c-46e0-8b16-6946488ee293\") " pod="openshift-dns/dns-default-85c82"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.692908 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad96cd59-8bf7-45d4-91fd-473f23f12565-metrics-tls\") pod \"ingress-operator-5b745b69d9-th5gf\" (UID: \"ad96cd59-8bf7-45d4-91fd-473f23f12565\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.694771 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ee4578f5-1608-403f-9132-5613a1b3a105-registry-certificates\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.696430 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee4578f5-1608-403f-9132-5613a1b3a105-trusted-ca\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.699468 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xmgrw"]
Feb 26 08:16:22 crc kubenswrapper[4741]: W0226 08:16:22.702903 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4fc717c_df6a_4ba5_a998_6385257e6f7e.slice/crio-df3474ac4a3000808851018902302900d7372bcfdb4023043e8958fa461c532e WatchSource:0}: Error finding container df3474ac4a3000808851018902302900d7372bcfdb4023043e8958fa461c532e: Status 404 returned error can't find the container with id df3474ac4a3000808851018902302900d7372bcfdb4023043e8958fa461c532e
Feb 26 08:16:22 crc kubenswrapper[4741]: W0226 08:16:22.703642 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c5ad140_54c5_4f7b_9e8f_7b1c0970baa6.slice/crio-5ade1b698ba39060064b9929a2f5ad5472b23f0b5fc547a5288b1c05822149ab WatchSource:0}: Error finding container 5ade1b698ba39060064b9929a2f5ad5472b23f0b5fc547a5288b1c05822149ab: Status 404 returned error can't find the container with id 5ade1b698ba39060064b9929a2f5ad5472b23f0b5fc547a5288b1c05822149ab
Feb 26 08:16:22 crc kubenswrapper[4741]: W0226 08:16:22.705898 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a4923cd_a652_4027_9945_5b20f94b0fff.slice/crio-b31e2156a812d2aac56342553666081154f4458181d36bde1de8fdae3cf75c94 WatchSource:0}: Error finding container b31e2156a812d2aac56342553666081154f4458181d36bde1de8fdae3cf75c94: Status 404 returned error can't find the container with id b31e2156a812d2aac56342553666081154f4458181d36bde1de8fdae3cf75c94
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.709730 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt"]
Feb 26 08:16:22 crc kubenswrapper[4741]: W0226 08:16:22.713567 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1087876_b61e_42ed_bd63_0ede0e6a09e3.slice/crio-434cf3f8ea59954ee8ad3512f97c42935f5ea4c8319c0e4e57eee6b7231fe98d WatchSource:0}: Error finding container 434cf3f8ea59954ee8ad3512f97c42935f5ea4c8319c0e4e57eee6b7231fe98d: Status 404 returned error can't find the container with id 434cf3f8ea59954ee8ad3512f97c42935f5ea4c8319c0e4e57eee6b7231fe98d
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.715320 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ee4578f5-1608-403f-9132-5613a1b3a105-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.723667 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lwf9\" (UniqueName: \"kubernetes.io/projected/d8f7cb7a-591c-46e0-8b16-6946488ee293-kube-api-access-5lwf9\") pod \"dns-default-85c82\" (UID: \"d8f7cb7a-591c-46e0-8b16-6946488ee293\") " pod="openshift-dns/dns-default-85c82"
Feb 26 08:16:22 crc kubenswrapper[4741]: W0226 08:16:22.730746 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ceb1ab9_9ce4_4a40_9273_727f0499aa21.slice/crio-f444e4748b1e517b687359bf9ae1bed668ef893ef08a930433d4a16ce2c105cf WatchSource:0}: Error finding container f444e4748b1e517b687359bf9ae1bed668ef893ef08a930433d4a16ce2c105cf: Status 404 returned error can't find the container with id f444e4748b1e517b687359bf9ae1bed668ef893ef08a930433d4a16ce2c105cf
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.731145 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-85c82"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.739226 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jmlc\" (UniqueName: \"kubernetes.io/projected/fa4d59a5-1098-4508-9095-1e2aa97c478b-kube-api-access-7jmlc\") pod \"ingress-canary-2264l\" (UID: \"fa4d59a5-1098-4508-9095-1e2aa97c478b\") " pod="openshift-ingress-canary/ingress-canary-2264l"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.758819 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad96cd59-8bf7-45d4-91fd-473f23f12565-bound-sa-token\") pod \"ingress-operator-5b745b69d9-th5gf\" (UID: \"ad96cd59-8bf7-45d4-91fd-473f23f12565\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.779374 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zq4\" (UniqueName: \"kubernetes.io/projected/e2b04bf6-e1f3-48a6-9277-1a220a59ef82-kube-api-access-r5zq4\") pod \"csi-hostpathplugin-mt4jv\" (UID: \"e2b04bf6-e1f3-48a6-9277-1a220a59ef82\") " pod="hostpath-provisioner/csi-hostpathplugin-mt4jv"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.782281 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc"
Feb 26 08:16:22 crc kubenswrapper[4741]: E0226 08:16:22.782629 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:23.282613118 +0000 UTC m=+218.278550505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.805205 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pl5j\" (UniqueName: \"kubernetes.io/projected/ad96cd59-8bf7-45d4-91fd-473f23f12565-kube-api-access-4pl5j\") pod \"ingress-operator-5b745b69d9-th5gf\" (UID: \"ad96cd59-8bf7-45d4-91fd-473f23f12565\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf"
Feb 26 08:16:22 crc kubenswrapper[4741]: W0226 08:16:22.815652 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a8d4594_ac99_42e7_bfee_f526419ee990.slice/crio-5fb61b2f6443cd98a5db3e919224a47f6b2e7a1ff8421b744d87dc0b14aad729 WatchSource:0}: Error finding container 5fb61b2f6443cd98a5db3e919224a47f6b2e7a1ff8421b744d87dc0b14aad729: Status 404 returned error can't find the container with id 5fb61b2f6443cd98a5db3e919224a47f6b2e7a1ff8421b744d87dc0b14aad729
Feb 26 08:16:22 crc kubenswrapper[4741]: W0226 08:16:22.816177 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87ce91fe_866c_44e7_8c94_f3d7a994cc75.slice/crio-fe0f30b05c0be4af624e36d02a22e30ad569f4419f9a0d529170eac8e7171552 WatchSource:0}: Error finding container fe0f30b05c0be4af624e36d02a22e30ad569f4419f9a0d529170eac8e7171552: Status 404 returned error can't find the container with id fe0f30b05c0be4af624e36d02a22e30ad569f4419f9a0d529170eac8e7171552
Feb 26 08:16:22 crc kubenswrapper[4741]: W0226 08:16:22.816712 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2e8bf27_ab59_474a_a502_f214701d5208.slice/crio-027f85a7be2bf189dc4f34c85897184d1cfab1c86b875433e5c7052585046435 WatchSource:0}: Error finding container 027f85a7be2bf189dc4f34c85897184d1cfab1c86b875433e5c7052585046435: Status 404 returned error can't find the container with id 027f85a7be2bf189dc4f34c85897184d1cfab1c86b875433e5c7052585046435
Feb 26 08:16:22 crc kubenswrapper[4741]: W0226 08:16:22.818680 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66b44309_5cc3_4f00_bb25_0b0ef360e06e.slice/crio-3bb87d232fd987842bee7330873489384e3afb7b6b18aefe3a52d043cd5b5a12 WatchSource:0}: Error finding container 3bb87d232fd987842bee7330873489384e3afb7b6b18aefe3a52d043cd5b5a12: Status 404 returned error can't find the container with id 3bb87d232fd987842bee7330873489384e3afb7b6b18aefe3a52d043cd5b5a12
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.819904 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrf5b\" (UniqueName: \"kubernetes.io/projected/1535713f-0ce9-4308-87b9-d19b3197c67a-kube-api-access-rrf5b\") pod \"machine-config-server-8m2b9\" (UID: \"1535713f-0ce9-4308-87b9-d19b3197c67a\") " pod="openshift-machine-config-operator/machine-config-server-8m2b9"
Feb 26 08:16:22 crc kubenswrapper[4741]: W0226 08:16:22.821228 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf00ebe1b_d72e_48e7_8a75_17f830a53378.slice/crio-afec44f32890cf991c40737ed6e237a133aeaa56bea06256803591d0e04800de WatchSource:0}: Error finding container afec44f32890cf991c40737ed6e237a133aeaa56bea06256803591d0e04800de: Status 404 returned error can't find the container with id afec44f32890cf991c40737ed6e237a133aeaa56bea06256803591d0e04800de
Feb 26 08:16:22 crc kubenswrapper[4741]: W0226 08:16:22.828489 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14003e7a_87ba_4ef7_8817_96288f162752.slice/crio-e2b965f88b7527fd36cf9c09fab352e4e280e99b124e20676077a99453f0f3d7 WatchSource:0}: Error finding container e2b965f88b7527fd36cf9c09fab352e4e280e99b124e20676077a99453f0f3d7: Status 404 returned error can't find the container with id e2b965f88b7527fd36cf9c09fab352e4e280e99b124e20676077a99453f0f3d7
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.862256 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4rxq\" (UniqueName: \"kubernetes.io/projected/565843a6-5907-4445-9686-cb92b1a56bec-kube-api-access-d4rxq\") pod \"auto-csr-approver-29534896-rcrbz\" (UID: \"565843a6-5907-4445-9686-cb92b1a56bec\") " pod="openshift-infra/auto-csr-approver-29534896-rcrbz"
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.883283 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 08:16:22 crc kubenswrapper[4741]: E0226 08:16:22.883629 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:23.383612933 +0000 UTC m=+218.379550320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.893261 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd"]
Feb 26 08:16:22 crc kubenswrapper[4741]: I0226 08:16:22.985389 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc"
Feb 26 08:16:22 crc kubenswrapper[4741]: E0226 08:16:22.985877 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:23.485856634 +0000 UTC m=+218.481794021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.006618 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534896-rcrbz"
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.026690 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf"
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.037411 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2264l"
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.047335 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8m2b9"
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.055623 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mt4jv"
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.087400 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 08:16:23 crc kubenswrapper[4741]: E0226 08:16:23.087651 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:23.58760562 +0000 UTC m=+218.583543017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.087806 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc"
Feb 26 08:16:23 crc kubenswrapper[4741]: E0226 08:16:23.088247 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:23.588235558 +0000 UTC m=+218.584172945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.091172 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k"]
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.188833 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 08:16:23 crc kubenswrapper[4741]: E0226 08:16:23.189012 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:23.688987726 +0000 UTC m=+218.684925113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.189086 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc"
Feb 26 08:16:23 crc kubenswrapper[4741]: E0226 08:16:23.189424 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:23.689416699 +0000 UTC m=+218.685354076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 08:16:23 crc kubenswrapper[4741]: W0226 08:16:23.195711 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81eb5f5b_a203_4076_97e2_8854cf7a22bd.slice/crio-5eb9da433bcbb9449994ff7d65585bd1646ea78b61debc5c04988b68c97dca16 WatchSource:0}: Error finding container 5eb9da433bcbb9449994ff7d65585bd1646ea78b61debc5c04988b68c97dca16: Status 404 returned error can't find the container with id 5eb9da433bcbb9449994ff7d65585bd1646ea78b61debc5c04988b68c97dca16
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.292940 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 08:16:23 crc kubenswrapper[4741]: E0226 08:16:23.293385 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:23.793366578 +0000 UTC m=+218.789303965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.313074 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" event={"ID":"e7c8235f-88c7-4d87-b1b5-9514cb07f9cf","Type":"ContainerStarted","Data":"3a6f4e1bef34acf632df820bbcec32c810c29363e069493d9b590a81400c9464"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.315616 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" event={"ID":"e2e8bf27-ab59-474a-a502-f214701d5208","Type":"ContainerStarted","Data":"027f85a7be2bf189dc4f34c85897184d1cfab1c86b875433e5c7052585046435"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.318780 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" event={"ID":"1ceb1ab9-9ce4-4a40-9273-727f0499aa21","Type":"ContainerStarted","Data":"f444e4748b1e517b687359bf9ae1bed668ef893ef08a930433d4a16ce2c105cf"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.325077 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p"]
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.333500 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hdqgn" event={"ID":"a1087876-b61e-42ed-bd63-0ede0e6a09e3","Type":"ContainerStarted","Data":"434cf3f8ea59954ee8ad3512f97c42935f5ea4c8319c0e4e57eee6b7231fe98d"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.337695 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng"]
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.338633 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" event={"ID":"f00ebe1b-d72e-48e7-8a75-17f830a53378","Type":"ContainerStarted","Data":"afec44f32890cf991c40737ed6e237a133aeaa56bea06256803591d0e04800de"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.340465 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xmgrw" event={"ID":"14003e7a-87ba-4ef7-8817-96288f162752","Type":"ContainerStarted","Data":"e2b965f88b7527fd36cf9c09fab352e4e280e99b124e20676077a99453f0f3d7"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.341606 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" event={"ID":"8c718e96-1e2d-41e8-beff-d68534e49add","Type":"ContainerStarted","Data":"5b1645b8ae6516665a73303284906280ac521a7119565adc6ecf66c55720def5"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.342424 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nrk4h" event={"ID":"bf09eafa-6397-4fa3-b7f4-c56e66348f9a","Type":"ContainerStarted","Data":"c2b1e9f3786fc71f9aee0e4c38b5e5b8b3a9052b2cb7ccf5f684507fbe640e3b"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.343126 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" event={"ID":"b4fc717c-df6a-4ba5-a998-6385257e6f7e","Type":"ContainerStarted","Data":"df3474ac4a3000808851018902302900d7372bcfdb4023043e8958fa461c532e"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.347469 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" event={"ID":"7b281845-065b-47b4-9bd9-2d45ce79b693","Type":"ContainerStarted","Data":"6b570eaac056a87d07ea01e101158d5d29213f710268e0e5a730fe9d74d21b43"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.353295 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" event={"ID":"9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6","Type":"ContainerStarted","Data":"5ade1b698ba39060064b9929a2f5ad5472b23f0b5fc547a5288b1c05822149ab"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.355995 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" event={"ID":"98312bcf-f7e9-4868-904a-c27e825ce830","Type":"ContainerStarted","Data":"03180b270efe2130e44062d7a3422c4e9c9f5a019e90f700a83b945c53ebffd0"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.357097 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-thvbc" event={"ID":"1a8d4594-ac99-42e7-bfee-f526419ee990","Type":"ContainerStarted","Data":"5fb61b2f6443cd98a5db3e919224a47f6b2e7a1ff8421b744d87dc0b14aad729"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.357869 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" event={"ID":"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6","Type":"ContainerStarted","Data":"8dbafa4756f1773f39f9420b97317ba7cf8fb6610ead74e185532abad2830626"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.358691 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" event={"ID":"4a4923cd-a652-4027-9945-5b20f94b0fff","Type":"ContainerStarted","Data":"b31e2156a812d2aac56342553666081154f4458181d36bde1de8fdae3cf75c94"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.359967 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" event={"ID":"79be0654-3564-4cd1-87f7-e9eb1c972bbd","Type":"ContainerStarted","Data":"d265a786f2a13182068215330fd09d9eb0a3ce9daf7d2a74ad233c1e664785cd"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.360810 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8jb7x" event={"ID":"c7a6edd5-0d0d-431a-9884-af988d7db265","Type":"ContainerStarted","Data":"c63b0016c1c4e1ad4ada22f04a5d4b793864cf347212097fe36130d40bbbec3a"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.372307 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" event={"ID":"81eb5f5b-a203-4076-97e2-8854cf7a22bd","Type":"ContainerStarted","Data":"5eb9da433bcbb9449994ff7d65585bd1646ea78b61debc5c04988b68c97dca16"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.383485 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd" event={"ID":"9eef37bd-7913-4e1c-baf0-775f14f6e18a","Type":"ContainerStarted","Data":"098e50605fb0a6048c3b2c093c5d7d5e848dffecf809bca3d27d1f245a9fa42f"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.385869 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" event={"ID":"66b44309-5cc3-4f00-bb25-0b0ef360e06e","Type":"ContainerStarted","Data":"3bb87d232fd987842bee7330873489384e3afb7b6b18aefe3a52d043cd5b5a12"}
Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 
08:16:23.390599 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t"] Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.390640 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" event={"ID":"87ce91fe-866c-44e7-8c94-f3d7a994cc75","Type":"ContainerStarted","Data":"fe0f30b05c0be4af624e36d02a22e30ad569f4419f9a0d529170eac8e7171552"} Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.392356 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx"] Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.394454 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:23 crc kubenswrapper[4741]: E0226 08:16:23.394919 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:23.894905299 +0000 UTC m=+218.890842686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.396427 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" event={"ID":"1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2","Type":"ContainerStarted","Data":"b81a67dd280d7bd1ae1c21e5f8323231fa179770747236d0c9c60e9b1646c750"} Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.397140 4741 patch_prober.go:28] interesting pod/downloads-7954f5f757-ptx5j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.397178 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ptx5j" podUID="59420a86-a033-4cbe-98bf-3ec780191ed6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.397202 4741 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7qhb6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.397239 4741 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" podUID="4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.403700 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wtbm"] Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.413847 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc"] Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.450787 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx"] Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.462229 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-85c82"] Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.485433 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5tpv5"] Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.495936 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:23 crc kubenswrapper[4741]: E0226 08:16:23.496746 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 08:16:23.996721077 +0000 UTC m=+218.992658464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.539141 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" podStartSLOduration=164.539093406 podStartE2EDuration="2m44.539093406s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:23.536854911 +0000 UTC m=+218.532792298" watchObservedRunningTime="2026-02-26 08:16:23.539093406 +0000 UTC m=+218.535030793" Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.597844 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:23 crc kubenswrapper[4741]: E0226 08:16:23.598474 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:24.098448283 +0000 UTC m=+219.094385710 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.699669 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:23 crc kubenswrapper[4741]: E0226 08:16:23.699929 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:24.19989073 +0000 UTC m=+219.195828117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.700084 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:23 crc kubenswrapper[4741]: E0226 08:16:23.700834 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:24.200809327 +0000 UTC m=+219.196746774 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.782001 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-ptx5j" podStartSLOduration=164.781935 podStartE2EDuration="2m44.781935s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:23.779903622 +0000 UTC m=+218.775841069" watchObservedRunningTime="2026-02-26 08:16:23.781935 +0000 UTC m=+218.777872397" Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.801217 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:23 crc kubenswrapper[4741]: E0226 08:16:23.801475 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:24.301435571 +0000 UTC m=+219.297372998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.801684 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:23 crc kubenswrapper[4741]: E0226 08:16:23.802251 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:24.302229634 +0000 UTC m=+219.298167061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:23 crc kubenswrapper[4741]: I0226 08:16:23.903691 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:23 crc kubenswrapper[4741]: E0226 08:16:23.904821 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:24.404784003 +0000 UTC m=+219.400721430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.006025 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.006612 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:24.506582141 +0000 UTC m=+219.502519538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.016479 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bfq7l" podStartSLOduration=165.016459575 podStartE2EDuration="2m45.016459575s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:24.015856248 +0000 UTC m=+219.011793675" watchObservedRunningTime="2026-02-26 08:16:24.016459575 +0000 UTC m=+219.012396962" Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.107791 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.107969 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:24.607927955 +0000 UTC m=+219.603865372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.108163 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.108596 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:24.608583394 +0000 UTC m=+219.604520781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.209754 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.210196 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:24.710172666 +0000 UTC m=+219.706110063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.311338 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.311723 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:24.811702186 +0000 UTC m=+219.807639573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.407366 4741 generic.go:334] "Generic (PLEG): container finished" podID="8c718e96-1e2d-41e8-beff-d68534e49add" containerID="5b1645b8ae6516665a73303284906280ac521a7119565adc6ecf66c55720def5" exitCode=0 Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.407631 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" event={"ID":"8c718e96-1e2d-41e8-beff-d68534e49add","Type":"ContainerDied","Data":"5b1645b8ae6516665a73303284906280ac521a7119565adc6ecf66c55720def5"} Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.409518 4741 patch_prober.go:28] interesting pod/downloads-7954f5f757-ptx5j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.409679 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ptx5j" podUID="59420a86-a033-4cbe-98bf-3ec780191ed6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.412657 4741 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7qhb6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.413013 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.414683 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" podUID="4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.413177 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:24.913145233 +0000 UTC m=+219.909082660 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.415062 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.415643 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:24.915627945 +0000 UTC m=+219.911565362 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.453878 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-llf79" podStartSLOduration=165.453846194 podStartE2EDuration="2m45.453846194s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:24.448918492 +0000 UTC m=+219.444855919" watchObservedRunningTime="2026-02-26 08:16:24.453846194 +0000 UTC m=+219.449783611" Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.516043 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.516371 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.016333431 +0000 UTC m=+220.012270858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.516835 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.517657 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.017619338 +0000 UTC m=+220.013556755 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: W0226 08:16:24.521773 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00bfb7f6_7024_4431_9fc6_f86f8ff5e363.slice/crio-8e524bd48a34403aac722732ad209889baa60c56106bfba1c3477458a06d1e1f WatchSource:0}: Error finding container 8e524bd48a34403aac722732ad209889baa60c56106bfba1c3477458a06d1e1f: Status 404 returned error can't find the container with id 8e524bd48a34403aac722732ad209889baa60c56106bfba1c3477458a06d1e1f Feb 26 08:16:24 crc kubenswrapper[4741]: W0226 08:16:24.523712 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15f99982_b491_4a49_8fb9_f6355b956e11.slice/crio-a39d05ab2144a9536eb53ee9bd6c161de404f8b6430a0f00a2c9b28bfc43527b WatchSource:0}: Error finding container a39d05ab2144a9536eb53ee9bd6c161de404f8b6430a0f00a2c9b28bfc43527b: Status 404 returned error can't find the container with id a39d05ab2144a9536eb53ee9bd6c161de404f8b6430a0f00a2c9b28bfc43527b Feb 26 08:16:24 crc kubenswrapper[4741]: W0226 08:16:24.527271 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca42622a_7a05_4d6d_a432_389fa771e319.slice/crio-eb11d259f7c635b93f577160bfb39204c099b175c8672cf0844ef4a893126e9c WatchSource:0}: Error finding container eb11d259f7c635b93f577160bfb39204c099b175c8672cf0844ef4a893126e9c: Status 404 returned error can't find the container 
with id eb11d259f7c635b93f577160bfb39204c099b175c8672cf0844ef4a893126e9c Feb 26 08:16:24 crc kubenswrapper[4741]: W0226 08:16:24.555880 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3676b67_b73e_428d_bcfc_ba5be6f44bb1.slice/crio-4ee7229b3a1d8c07692ace4df4e8286e5f625e06bcfaeffe4edeccac3bd38435 WatchSource:0}: Error finding container 4ee7229b3a1d8c07692ace4df4e8286e5f625e06bcfaeffe4edeccac3bd38435: Status 404 returned error can't find the container with id 4ee7229b3a1d8c07692ace4df4e8286e5f625e06bcfaeffe4edeccac3bd38435 Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.618244 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.618395 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.118364085 +0000 UTC m=+220.114301482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.618886 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.619310 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.119292402 +0000 UTC m=+220.115229789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: W0226 08:16:24.671929 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1535713f_0ce9_4308_87b9_d19b3197c67a.slice/crio-7e2dcb0f679f9df1009b2017fd5fcabd1a188934c238961326278cb48f99a7aa WatchSource:0}: Error finding container 7e2dcb0f679f9df1009b2017fd5fcabd1a188934c238961326278cb48f99a7aa: Status 404 returned error can't find the container with id 7e2dcb0f679f9df1009b2017fd5fcabd1a188934c238961326278cb48f99a7aa Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.720294 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.720877 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.220854763 +0000 UTC m=+220.216792150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.822052 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.822558 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.322538428 +0000 UTC m=+220.318475815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.921248 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2"] Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.923835 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.923987 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.423960985 +0000 UTC m=+220.419898372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.924064 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:24 crc kubenswrapper[4741]: E0226 08:16:24.924414 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.424406258 +0000 UTC m=+220.420343645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.933545 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mt4jv"] Feb 26 08:16:24 crc kubenswrapper[4741]: I0226 08:16:24.957831 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534896-rcrbz"] Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.022052 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf"] Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.025777 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.026002 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.525964768 +0000 UTC m=+220.521902155 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.026282 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.026758 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.526743751 +0000 UTC m=+220.522681138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.068163 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2264l"] Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.128014 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.128365 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.628316092 +0000 UTC m=+220.624253509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.128452 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.129023 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.629004582 +0000 UTC m=+220.624941999 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.149975 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.150082 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:16:25 crc kubenswrapper[4741]: W0226 08:16:25.163677 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd14ecdcf_bbd4_4088_b767_19deee670b4c.slice/crio-d96ca8fbfc599c8dc055cf835dd766f5333d2d8172857559c6ec0a05a04024d5 WatchSource:0}: Error finding container d96ca8fbfc599c8dc055cf835dd766f5333d2d8172857559c6ec0a05a04024d5: Status 404 returned error can't find the container with id d96ca8fbfc599c8dc055cf835dd766f5333d2d8172857559c6ec0a05a04024d5 Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.191553 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.191628 4741 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.230573 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.230764 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.730731368 +0000 UTC m=+220.726668755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.231473 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.232873 4741 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.732840828 +0000 UTC m=+220.728778455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: W0226 08:16:25.233617 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa4d59a5_1098_4508_9095_1e2aa97c478b.slice/crio-97aba33fe0f1e9efdd685a05651a21e48d9768209a442c563671379858cf43f7 WatchSource:0}: Error finding container 97aba33fe0f1e9efdd685a05651a21e48d9768209a442c563671379858cf43f7: Status 404 returned error can't find the container with id 97aba33fe0f1e9efdd685a05651a21e48d9768209a442c563671379858cf43f7 Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.332600 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.333003 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 08:16:25.832983149 +0000 UTC m=+220.828920536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.333774 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.334071 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.83406424 +0000 UTC m=+220.830001627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.416050 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" event={"ID":"a31d7247-0e6e-4cff-9d4e-ff7391171f8f","Type":"ContainerStarted","Data":"486c6a20c19e2ad2c7a02bab8a71e465bb83ced5a4023ef272ea6bdc01518f1e"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.417315 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534896-rcrbz" event={"ID":"565843a6-5907-4445-9686-cb92b1a56bec","Type":"ContainerStarted","Data":"dae926d27924f18f2c634b94b448ef4bf9c6b167d41e30a282b363af48ecd108"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.419051 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" event={"ID":"4fd3e06d-6012-4f0e-9425-91836b431b5b","Type":"ContainerStarted","Data":"88a6d94b7cf54833c8f99a5bea70362e09206ba6b40d8e21738bec7399534b4b"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.422663 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8m2b9" event={"ID":"1535713f-0ce9-4308-87b9-d19b3197c67a","Type":"ContainerStarted","Data":"7e2dcb0f679f9df1009b2017fd5fcabd1a188934c238961326278cb48f99a7aa"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.430387 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2264l" 
event={"ID":"fa4d59a5-1098-4508-9095-1e2aa97c478b","Type":"ContainerStarted","Data":"97aba33fe0f1e9efdd685a05651a21e48d9768209a442c563671379858cf43f7"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.431938 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-85c82" event={"ID":"d8f7cb7a-591c-46e0-8b16-6946488ee293","Type":"ContainerStarted","Data":"fe1ad11c33e3d110d3c392bd30a0da4d4d72492ccff343b3afa46e99304c5a4b"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.433467 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" event={"ID":"d14ecdcf-bbd4-4088-b767-19deee670b4c","Type":"ContainerStarted","Data":"d96ca8fbfc599c8dc055cf835dd766f5333d2d8172857559c6ec0a05a04024d5"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.434619 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.435026 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:25.934989012 +0000 UTC m=+220.930926589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.436538 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" event={"ID":"5fdadf1f-38a7-41a9-ab52-e750457f3e00","Type":"ContainerStarted","Data":"2d76cb12fd7703830d160de2f9ace212d9375da218dd19e75f0e525d5a462164"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.441520 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5tpv5" event={"ID":"d3676b67-b73e-428d-bcfc-ba5be6f44bb1","Type":"ContainerStarted","Data":"4ee7229b3a1d8c07692ace4df4e8286e5f625e06bcfaeffe4edeccac3bd38435"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.449899 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" event={"ID":"15f99982-b491-4a49-8fb9-f6355b956e11","Type":"ContainerStarted","Data":"a39d05ab2144a9536eb53ee9bd6c161de404f8b6430a0f00a2c9b28bfc43527b"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.451771 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" event={"ID":"08700e8f-7c54-4660-a65f-1871db5bbe5e","Type":"ContainerStarted","Data":"3f08abcb7e6dec21ace1655a1fbe3ed721f849e439ebdd8acad19f32711d4fe3"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.457756 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" 
event={"ID":"00bfb7f6-7024-4431-9fc6-f86f8ff5e363","Type":"ContainerStarted","Data":"8e524bd48a34403aac722732ad209889baa60c56106bfba1c3477458a06d1e1f"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.459748 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" event={"ID":"ccea6218-4c8e-45dd-890f-5f9fd1806c99","Type":"ContainerStarted","Data":"1d9eaf7de6cc1567b90825059968fc0b32084003f8b0892602004dcaf28c138c"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.467456 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" event={"ID":"e2b04bf6-e1f3-48a6-9277-1a220a59ef82","Type":"ContainerStarted","Data":"080070bfcab2727696fddac63d4155988fa2fff1c2aa75d9f4679a688d37cd28"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.476859 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8jb7x" event={"ID":"c7a6edd5-0d0d-431a-9884-af988d7db265","Type":"ContainerStarted","Data":"16dcdcf8a310495a8e64ca5866106256dcbf50d2ecc7df3443468ae26f471658"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.495602 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf" event={"ID":"ad96cd59-8bf7-45d4-91fd-473f23f12565","Type":"ContainerStarted","Data":"33111a534b3ed34550ef68d3fd74aec2367b5c96a081275d9ec715459d655cb9"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.497224 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" event={"ID":"ca42622a-7a05-4d6d-a432-389fa771e319","Type":"ContainerStarted","Data":"eb11d259f7c635b93f577160bfb39204c099b175c8672cf0844ef4a893126e9c"} Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.498432 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.498452 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.499902 4741 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-42f2w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" start-of-body= Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.500440 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" podUID="98312bcf-f7e9-4868-904a-c27e825ce830" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.500244 4741 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-r5l64 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.500517 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" podUID="7b281845-065b-47b4-9bd9-2d45ce79b693" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.520702 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" podStartSLOduration=166.520676427 
podStartE2EDuration="2m46.520676427s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:25.517404143 +0000 UTC m=+220.513341530" watchObservedRunningTime="2026-02-26 08:16:25.520676427 +0000 UTC m=+220.516613814" Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.539147 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.539973 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.039948781 +0000 UTC m=+221.035886168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.641897 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.642494 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.142456899 +0000 UTC m=+221.138394276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.642700 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.643072 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.143056667 +0000 UTC m=+221.138994044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.743558 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.743894 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.243849546 +0000 UTC m=+221.239786933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.744186 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.744503 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.244489514 +0000 UTC m=+221.240426901 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.823075 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" podStartSLOduration=166.823048914 podStartE2EDuration="2m46.823048914s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:25.560160302 +0000 UTC m=+220.556097699" watchObservedRunningTime="2026-02-26 08:16:25.823048914 +0000 UTC m=+220.818986301" Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.844835 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.845421 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.345397206 +0000 UTC m=+221.341334593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:25 crc kubenswrapper[4741]: I0226 08:16:25.948947 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:25 crc kubenswrapper[4741]: E0226 08:16:25.950469 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.450455888 +0000 UTC m=+221.446393275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.051269 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:26 crc kubenswrapper[4741]: E0226 08:16:26.051462 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.551430672 +0000 UTC m=+221.547368059 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.051852 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:26 crc kubenswrapper[4741]: E0226 08:16:26.052257 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.552241595 +0000 UTC m=+221.548178982 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.152958 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:26 crc kubenswrapper[4741]: E0226 08:16:26.153296 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.653265851 +0000 UTC m=+221.649203238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.257022 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:26 crc kubenswrapper[4741]: E0226 08:16:26.258156 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.758136457 +0000 UTC m=+221.754073834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.358931 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:26 crc kubenswrapper[4741]: E0226 08:16:26.359120 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.85907555 +0000 UTC m=+221.855012937 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.359556 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:26 crc kubenswrapper[4741]: E0226 08:16:26.359897 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.859884433 +0000 UTC m=+221.855821820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.444708 4741 ???:1] "http: TLS handshake error from 192.168.126.11:41104: no serving certificate available for the kubelet" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.460442 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:26 crc kubenswrapper[4741]: E0226 08:16:26.460916 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:26.960896019 +0000 UTC m=+221.956833406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.516761 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf" event={"ID":"ad96cd59-8bf7-45d4-91fd-473f23f12565","Type":"ContainerStarted","Data":"80a861001281f5bd0214702710f6a187ecf00633841189f7d539e01c39cfe71e"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.516844 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf" event={"ID":"ad96cd59-8bf7-45d4-91fd-473f23f12565","Type":"ContainerStarted","Data":"4e77ee1e408a75122726651093e2afde858e86b87ee2db57b4be945af08f394c"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.519695 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" event={"ID":"b4fc717c-df6a-4ba5-a998-6385257e6f7e","Type":"ContainerStarted","Data":"0553f6ac15d04eaa843de47fdb335c1973288689586121cf0ac6c5c6c367b9fa"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.522606 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" event={"ID":"a31d7247-0e6e-4cff-9d4e-ff7391171f8f","Type":"ContainerStarted","Data":"de5b0fb8f4f0bd488b981643b304d7e2739144cc07532a80cb01655c760edf1a"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.522652 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" event={"ID":"a31d7247-0e6e-4cff-9d4e-ff7391171f8f","Type":"ContainerStarted","Data":"f8345b69100e74a3ff9b5b000de7c3c9ff5b14ed780f0acd30e7c7dda9b0f18e"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.524916 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" event={"ID":"f00ebe1b-d72e-48e7-8a75-17f830a53378","Type":"ContainerStarted","Data":"f315ef2f180b27d36bde1b18dabe416bcde8828616201021e2fd79b6f8ec7c0a"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.524946 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" event={"ID":"f00ebe1b-d72e-48e7-8a75-17f830a53378","Type":"ContainerStarted","Data":"39928622a0df9f1ea51bb7d461c8f171156a1bbadff8ab3594c58e0226d593a1"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.527467 4741 generic.go:334] "Generic (PLEG): container finished" podID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerID="3dedca29969030949000b93bffc4ab08f9a44f39bdf4c7b07c75d014d26336a2" exitCode=0 Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.527548 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" event={"ID":"1ceb1ab9-9ce4-4a40-9273-727f0499aa21","Type":"ContainerDied","Data":"3dedca29969030949000b93bffc4ab08f9a44f39bdf4c7b07c75d014d26336a2"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.527677 4741 ???:1] "http: TLS handshake error from 192.168.126.11:41114: no serving certificate available for the kubelet" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.530218 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8m2b9" 
event={"ID":"1535713f-0ce9-4308-87b9-d19b3197c67a","Type":"ContainerStarted","Data":"4b8cf0a371de85237dc7c8cb695d11c61dbe567f0db4c726e8aa0be5cc063d4d"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.534771 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-85c82" event={"ID":"d8f7cb7a-591c-46e0-8b16-6946488ee293","Type":"ContainerStarted","Data":"18277945ee749fa4ab824d1c751332ccad4eeef23e37ebea1055410f1dd8e2f7"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.534802 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-85c82" event={"ID":"d8f7cb7a-591c-46e0-8b16-6946488ee293","Type":"ContainerStarted","Data":"38193b21b876572a04fc1b27c3cd238036716789b0893b7643b3dd5773ef8102"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.534910 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-85c82" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.537339 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" event={"ID":"e7c8235f-88c7-4d87-b1b5-9514cb07f9cf","Type":"ContainerStarted","Data":"22993fb422096d959207cce80a4b0748583e4526a77a082c928d4bd0c8626a54"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.537553 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.539098 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" event={"ID":"81eb5f5b-a203-4076-97e2-8854cf7a22bd","Type":"ContainerStarted","Data":"8ceb4bc350b9046cf5e36c6ed73475dd4441fe1aab207e842f439d8563b192e9"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.539678 4741 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-62dzt 
container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.539721 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" podUID="e7c8235f-88c7-4d87-b1b5-9514cb07f9cf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.542399 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" event={"ID":"5fdadf1f-38a7-41a9-ab52-e750457f3e00","Type":"ContainerStarted","Data":"d31eb8c2ae5a4fda73e627b4576708199f1a138cd839de6e6708b5b31b2126c9"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.543637 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-th5gf" podStartSLOduration=167.543626548 podStartE2EDuration="2m47.543626548s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:26.542237208 +0000 UTC m=+221.538174585" watchObservedRunningTime="2026-02-26 08:16:26.543626548 +0000 UTC m=+221.539563935" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.546397 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5tpv5" event={"ID":"d3676b67-b73e-428d-bcfc-ba5be6f44bb1","Type":"ContainerStarted","Data":"9b34599c6752678ba6a873ab070d89a81f4ea0a8c50f84f9a460cf8e8a41e6f1"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.546455 4741 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5tpv5" event={"ID":"d3676b67-b73e-428d-bcfc-ba5be6f44bb1","Type":"ContainerStarted","Data":"828cf2377a9eac4b8fcb1feee5f2b97f44fed4113dc7ad345f5d987818a2fc81"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.553290 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" event={"ID":"ca42622a-7a05-4d6d-a432-389fa771e319","Type":"ContainerStarted","Data":"1af2167f5c40dba61318acaf57ae99048a35f00a7e6500e5f21befbaf55b7482"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.553870 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.555248 4741 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-btgjx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.44:5443/healthz\": dial tcp 10.217.0.44:5443: connect: connection refused" start-of-body= Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.555326 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" podUID="ca42622a-7a05-4d6d-a432-389fa771e319" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.44:5443/healthz\": dial tcp 10.217.0.44:5443: connect: connection refused" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.556879 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" event={"ID":"e2e8bf27-ab59-474a-a502-f214701d5208","Type":"ContainerStarted","Data":"7c176623ec83b5ac4e26d2fa76d6339a2d812252d7e09709c5c556093a599212"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.566597 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.567621 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r8ps4" podStartSLOduration=167.567595557 podStartE2EDuration="2m47.567595557s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:26.567192476 +0000 UTC m=+221.563129863" watchObservedRunningTime="2026-02-26 08:16:26.567595557 +0000 UTC m=+221.563532944" Feb 26 08:16:26 crc kubenswrapper[4741]: E0226 08:16:26.569268 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:27.069249125 +0000 UTC m=+222.065186712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.582205 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" event={"ID":"08700e8f-7c54-4660-a65f-1871db5bbe5e","Type":"ContainerStarted","Data":"c20acc58f41a1b6f3f3b53096ec5228150a52802270189ff8ce5ee0c8b87ef70"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.596935 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" event={"ID":"c1bd8334-7fd5-4be0-bd83-e486e6b5cfe6","Type":"ContainerStarted","Data":"d2162c41a7d40f53c1d26ecb6d60768333b745a49051a9fe79eddefd29830080"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.598277 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" podStartSLOduration=167.598256809 podStartE2EDuration="2m47.598256809s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:26.597963731 +0000 UTC m=+221.593901128" watchObservedRunningTime="2026-02-26 08:16:26.598256809 +0000 UTC m=+221.594194196" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.599810 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" 
event={"ID":"d14ecdcf-bbd4-4088-b767-19deee670b4c","Type":"ContainerStarted","Data":"304b511876c8405f8f580d32ff7aabcd983889ae69e0f492b6c47557edfc5183"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.622546 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" event={"ID":"00bfb7f6-7024-4431-9fc6-f86f8ff5e363","Type":"ContainerStarted","Data":"042be96eb239cbbecd0481882dc09fcf53ca0da536308bd711c85493a22e8232"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.622611 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" event={"ID":"00bfb7f6-7024-4431-9fc6-f86f8ff5e363","Type":"ContainerStarted","Data":"48c7e7695dc46259ce790b9608773b3120b7135532b0a05f95e9345a5295c8fd"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.623529 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.632792 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" event={"ID":"1ae79f3d-a7b9-46bc-b46d-bb96c890c0c2","Type":"ContainerStarted","Data":"209976dfba99fc5a988122d2bc75a9086a50f3587d9f677c297f244351bfddff"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.641756 4741 ???:1] "http: TLS handshake error from 192.168.126.11:41126: no serving certificate available for the kubelet" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.643762 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd" event={"ID":"9eef37bd-7913-4e1c-baf0-775f14f6e18a","Type":"ContainerStarted","Data":"f6dcdc9097be9b455cde9da0b38b6a2104be6480c51f3bb0bc3b6373587312dc"} Feb 26 08:16:26 
crc kubenswrapper[4741]: I0226 08:16:26.646830 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" event={"ID":"8c718e96-1e2d-41e8-beff-d68534e49add","Type":"ContainerStarted","Data":"c8f19b58211af3482a57bc200a97e4a76ac720a939c206f7f421729f44016ca9"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.651586 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-thvbc" event={"ID":"1a8d4594-ac99-42e7-bfee-f526419ee990","Type":"ContainerStarted","Data":"960c521da4b09456032c74bc12563f8292693c4812942834edd75c1981896440"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.651695 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-thvbc" event={"ID":"1a8d4594-ac99-42e7-bfee-f526419ee990","Type":"ContainerStarted","Data":"57703af20638170d992a39fc5f2baa2d9a7ba3b02f0db631317b33fdc964eba5"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.652995 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" event={"ID":"4fd3e06d-6012-4f0e-9425-91836b431b5b","Type":"ContainerStarted","Data":"6ee251a521e23494e8cd874c2a25feabe0ccfa74500d78dae7c9ae36b95c8e16"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.657262 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-85c82" podStartSLOduration=7.657241836 podStartE2EDuration="7.657241836s" podCreationTimestamp="2026-02-26 08:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:26.630763434 +0000 UTC m=+221.626700841" watchObservedRunningTime="2026-02-26 08:16:26.657241836 +0000 UTC m=+221.653179223" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.657917 4741 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8m2b9" podStartSLOduration=7.657913055 podStartE2EDuration="7.657913055s" podCreationTimestamp="2026-02-26 08:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:26.654211679 +0000 UTC m=+221.650149086" watchObservedRunningTime="2026-02-26 08:16:26.657913055 +0000 UTC m=+221.653850442" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.670585 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:26 crc kubenswrapper[4741]: E0226 08:16:26.672368 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:27.17234085 +0000 UTC m=+222.168278237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.681675 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2js2k" podStartSLOduration=167.681658738 podStartE2EDuration="2m47.681658738s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:26.680100993 +0000 UTC m=+221.676038380" watchObservedRunningTime="2026-02-26 08:16:26.681658738 +0000 UTC m=+221.677596115" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.703364 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nrk4h" event={"ID":"bf09eafa-6397-4fa3-b7f4-c56e66348f9a","Type":"ContainerStarted","Data":"80e6a0da3676704cecd79528c20266d122881761f0a2e9cc5f0610ee167e9ac1"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.707692 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" event={"ID":"15f99982-b491-4a49-8fb9-f6355b956e11","Type":"ContainerStarted","Data":"582052556e5704ffa79a43f94c221819faead3a2b623a473e734ca0fa8fd1598"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.708775 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.710188 4741 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" event={"ID":"87ce91fe-866c-44e7-8c94-f3d7a994cc75","Type":"ContainerStarted","Data":"21d2f7939118310bd198630240911a02275647586f9ad540f39d0d1e387507e8"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.710602 4741 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5wtbm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.710639 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" podUID="15f99982-b491-4a49-8fb9-f6355b956e11" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.711831 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hdqgn" event={"ID":"a1087876-b61e-42ed-bd63-0ede0e6a09e3","Type":"ContainerStarted","Data":"1f611af26e400f5cb9eb3967d775dc7d79dc72e69b9fa73b7b0e462ac1f6a7be"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.728297 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7" event={"ID":"439540a4-1c57-4c48-81ba-842dc3d88804","Type":"ContainerStarted","Data":"59fda3441f2930a002c4e3c7d600e0aa2ccbd061d26387fb712ecbbbda42e5e9"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.755645 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" 
event={"ID":"4a4923cd-a652-4027-9945-5b20f94b0fff","Type":"ContainerStarted","Data":"4d51495b539050bbdb1df992b20bf5d7c77285328601c00a4336a3ac7d909fb0"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.774921 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:26 crc kubenswrapper[4741]: E0226 08:16:26.779548 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:27.279534183 +0000 UTC m=+222.275471570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.784817 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2264l" event={"ID":"fa4d59a5-1098-4508-9095-1e2aa97c478b","Type":"ContainerStarted","Data":"a4ebb7d8041eaafd899652fb3c6c46ee9cbcfea36c82432daef03ce7dfd9e63a"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.807139 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" 
event={"ID":"ccea6218-4c8e-45dd-890f-5f9fd1806c99","Type":"ContainerStarted","Data":"36a981a05753f0ab0c2148efbe92f42f46c07aee6b381d820882cedda392e281"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.807734 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.811219 4741 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-66p6t container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.811268 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" podUID="ccea6218-4c8e-45dd-890f-5f9fd1806c99" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.842095 4741 ???:1] "http: TLS handshake error from 192.168.126.11:41128: no serving certificate available for the kubelet" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.849691 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" event={"ID":"9c5ad140-54c5-4f7b-9e8f-7b1c0970baa6","Type":"ContainerStarted","Data":"02d46bebac1016f12c4337c64673d0dde44511a9551cf8ad7d2ae5f365140bbc"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.869194 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" event={"ID":"66b44309-5cc3-4f00-bb25-0b0ef360e06e","Type":"ContainerStarted","Data":"64ee5d3e0d845661b0e0391a3049e86e9e7c20b4b4617fbf026da197f965a3a5"} Feb 26 08:16:26 crc 
kubenswrapper[4741]: I0226 08:16:26.872352 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lwxx" podStartSLOduration=167.872335402 podStartE2EDuration="2m47.872335402s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:26.85626471 +0000 UTC m=+221.852202107" watchObservedRunningTime="2026-02-26 08:16:26.872335402 +0000 UTC m=+221.868272779" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.874865 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xmgrw" event={"ID":"14003e7a-87ba-4ef7-8817-96288f162752","Type":"ContainerStarted","Data":"a089b07a87d9b8594bd8e4cef1255235247ce1309ec576f359cc815fb3f70bd1"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.874936 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xmgrw" event={"ID":"14003e7a-87ba-4ef7-8817-96288f162752","Type":"ContainerStarted","Data":"7ebae8dac449f00a96c9f39ead0f0d093f14c3ee5e55889fc294a52504f87807"} Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.876378 4741 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-42f2w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" start-of-body= Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.876426 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" podUID="98312bcf-f7e9-4868-904a-c27e825ce830" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: 
connection refused" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.876515 4741 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-r5l64 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.876536 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" podUID="7b281845-065b-47b4-9bd9-2d45ce79b693" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.876859 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.877757 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:26 crc kubenswrapper[4741]: E0226 08:16:26.877997 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:27.377976834 +0000 UTC m=+222.373914221 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.878196 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:26 crc kubenswrapper[4741]: E0226 08:16:26.881301 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:27.38128551 +0000 UTC m=+222.377222897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.894885 4741 patch_prober.go:28] interesting pod/console-operator-58897d9998-8jb7x container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.896521 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8jb7x" podUID="c7a6edd5-0d0d-431a-9884-af988d7db265" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.913649 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" podStartSLOduration=86.913621369 podStartE2EDuration="1m26.913621369s" podCreationTimestamp="2026-02-26 08:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:26.907429831 +0000 UTC m=+221.903367208" watchObservedRunningTime="2026-02-26 08:16:26.913621369 +0000 UTC m=+221.909558746" Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.948911 4741 ???:1] "http: TLS handshake error from 192.168.126.11:41130: no serving certificate available for the kubelet" Feb 26 08:16:26 
crc kubenswrapper[4741]: I0226 08:16:26.984329 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:26 crc kubenswrapper[4741]: I0226 08:16:26.985041 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" podStartSLOduration=167.985018303 podStartE2EDuration="2m47.985018303s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:26.98353538 +0000 UTC m=+221.979472767" watchObservedRunningTime="2026-02-26 08:16:26.985018303 +0000 UTC m=+221.980955690" Feb 26 08:16:26 crc kubenswrapper[4741]: E0226 08:16:26.985467 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:27.485446685 +0000 UTC m=+222.481384072 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.009235 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" podStartSLOduration=168.009213859 podStartE2EDuration="2m48.009213859s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.007575242 +0000 UTC m=+222.003512629" watchObservedRunningTime="2026-02-26 08:16:27.009213859 +0000 UTC m=+222.005151236" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.024893 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgdrd" podStartSLOduration=168.024870679 podStartE2EDuration="2m48.024870679s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.024375155 +0000 UTC m=+222.020312542" watchObservedRunningTime="2026-02-26 08:16:27.024870679 +0000 UTC m=+222.020808066" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.043844 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-qcpm7" podStartSLOduration=168.043824564 podStartE2EDuration="2m48.043824564s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.041848767 +0000 UTC m=+222.037786154" watchObservedRunningTime="2026-02-26 08:16:27.043824564 +0000 UTC m=+222.039761951" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.062411 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" podStartSLOduration=168.062387168 podStartE2EDuration="2m48.062387168s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.060427132 +0000 UTC m=+222.056364519" watchObservedRunningTime="2026-02-26 08:16:27.062387168 +0000 UTC m=+222.058324555" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.065760 4741 ???:1] "http: TLS handshake error from 192.168.126.11:41142: no serving certificate available for the kubelet" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.086734 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:27 crc kubenswrapper[4741]: E0226 08:16:27.087417 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:27.587396647 +0000 UTC m=+222.583334034 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.095732 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bh9dr" podStartSLOduration=168.095710137 podStartE2EDuration="2m48.095710137s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.090809116 +0000 UTC m=+222.086746503" watchObservedRunningTime="2026-02-26 08:16:27.095710137 +0000 UTC m=+222.091647524" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.114895 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" podStartSLOduration=168.114868618 podStartE2EDuration="2m48.114868618s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.112193821 +0000 UTC m=+222.108131208" watchObservedRunningTime="2026-02-26 08:16:27.114868618 +0000 UTC m=+222.110806005" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.138445 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtjq7" podStartSLOduration=168.138417885 podStartE2EDuration="2m48.138417885s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.137737845 +0000 UTC m=+222.133675232" watchObservedRunningTime="2026-02-26 08:16:27.138417885 +0000 UTC m=+222.134355272" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.184451 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-thvbc" podStartSLOduration=168.184427318 podStartE2EDuration="2m48.184427318s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.153367395 +0000 UTC m=+222.149304802" watchObservedRunningTime="2026-02-26 08:16:27.184427318 +0000 UTC m=+222.180364705" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.184585 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-hdqgn" podStartSLOduration=168.184579543 podStartE2EDuration="2m48.184579543s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.18274261 +0000 UTC m=+222.178679997" watchObservedRunningTime="2026-02-26 08:16:27.184579543 +0000 UTC m=+222.180516930" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.188853 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:27 crc kubenswrapper[4741]: E0226 08:16:27.189276 4741 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:27.689256467 +0000 UTC m=+222.685193844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.205421 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xgn6m" podStartSLOduration=168.205401901 podStartE2EDuration="2m48.205401901s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.202986532 +0000 UTC m=+222.198923929" watchObservedRunningTime="2026-02-26 08:16:27.205401901 +0000 UTC m=+222.201339288" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.230878 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mwxr2" podStartSLOduration=168.230858044 podStartE2EDuration="2m48.230858044s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.228074233 +0000 UTC m=+222.224011620" watchObservedRunningTime="2026-02-26 08:16:27.230858044 +0000 UTC m=+222.226795431" Feb 26 08:16:27 crc 
kubenswrapper[4741]: I0226 08:16:27.261325 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.263386 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.263430 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.283000 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" podStartSLOduration=168.282975693 podStartE2EDuration="2m48.282975693s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.278932656 +0000 UTC m=+222.274870063" watchObservedRunningTime="2026-02-26 08:16:27.282975693 +0000 UTC m=+222.278913090" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.285052 4741 ???:1] "http: TLS handshake error from 192.168.126.11:41156: no serving certificate available for the kubelet" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.290350 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: 
\"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:27 crc kubenswrapper[4741]: E0226 08:16:27.290701 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:27.790688804 +0000 UTC m=+222.786626191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.303619 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8jb7x" podStartSLOduration=168.303595576 podStartE2EDuration="2m48.303595576s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.302425662 +0000 UTC m=+222.298363049" watchObservedRunningTime="2026-02-26 08:16:27.303595576 +0000 UTC m=+222.299532963" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.324861 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xmgrw" podStartSLOduration=168.324838236 podStartE2EDuration="2m48.324838236s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 
08:16:27.322023515 +0000 UTC m=+222.317960912" watchObservedRunningTime="2026-02-26 08:16:27.324838236 +0000 UTC m=+222.320775623" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.345454 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-75m4m" podStartSLOduration=168.345238513 podStartE2EDuration="2m48.345238513s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.338746917 +0000 UTC m=+222.334684304" watchObservedRunningTime="2026-02-26 08:16:27.345238513 +0000 UTC m=+222.341175900" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.365893 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2264l" podStartSLOduration=8.365868106 podStartE2EDuration="8.365868106s" podCreationTimestamp="2026-02-26 08:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.361065378 +0000 UTC m=+222.357002765" watchObservedRunningTime="2026-02-26 08:16:27.365868106 +0000 UTC m=+222.361805493" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.389784 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xglrq" podStartSLOduration=168.389756734 podStartE2EDuration="2m48.389756734s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.389510407 +0000 UTC m=+222.385447804" watchObservedRunningTime="2026-02-26 08:16:27.389756734 +0000 UTC m=+222.385694121" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 
08:16:27.391622 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:27 crc kubenswrapper[4741]: E0226 08:16:27.392087 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:27.89207088 +0000 UTC m=+222.888008257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.451257 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-5tpv5" podStartSLOduration=168.451227742 podStartE2EDuration="2m48.451227742s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.446280909 +0000 UTC m=+222.442218306" watchObservedRunningTime="2026-02-26 08:16:27.451227742 +0000 UTC m=+222.447165129" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.462514 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" podStartSLOduration=168.462488725 podStartE2EDuration="2m48.462488725s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.410298364 +0000 UTC m=+222.406235751" watchObservedRunningTime="2026-02-26 08:16:27.462488725 +0000 UTC m=+222.458426102" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.482191 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jk2b5" podStartSLOduration=168.482166131 podStartE2EDuration="2m48.482166131s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.48002222 +0000 UTC m=+222.475959607" watchObservedRunningTime="2026-02-26 08:16:27.482166131 +0000 UTC m=+222.478103518" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.493729 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:27 crc kubenswrapper[4741]: E0226 08:16:27.494308 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:27.99428049 +0000 UTC m=+222.990218067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.535470 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9v7ng" podStartSLOduration=168.535449664 podStartE2EDuration="2m48.535449664s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.513124352 +0000 UTC m=+222.509061769" watchObservedRunningTime="2026-02-26 08:16:27.535449664 +0000 UTC m=+222.531387051" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.536367 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nrk4h" podStartSLOduration=168.53636314 podStartE2EDuration="2m48.53636314s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.534696872 +0000 UTC m=+222.530634269" watchObservedRunningTime="2026-02-26 08:16:27.53636314 +0000 UTC m=+222.532300527" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.569599 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-79j6p" podStartSLOduration=168.569568515 podStartE2EDuration="2m48.569568515s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.569062361 +0000 UTC m=+222.564999748" watchObservedRunningTime="2026-02-26 08:16:27.569568515 +0000 UTC m=+222.565505902" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.599769 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:27 crc kubenswrapper[4741]: E0226 08:16:27.600428 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:28.100407972 +0000 UTC m=+223.096345359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.644611 4741 ???:1] "http: TLS handshake error from 192.168.126.11:41162: no serving certificate available for the kubelet" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.702337 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:27 crc kubenswrapper[4741]: E0226 08:16:27.702879 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:28.202857648 +0000 UTC m=+223.198795035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.803865 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:27 crc kubenswrapper[4741]: E0226 08:16:27.804305 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:28.304283615 +0000 UTC m=+223.300221002 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.885591 4741 patch_prober.go:28] interesting pod/console-operator-58897d9998-8jb7x container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.885620 4741 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-62dzt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.885647 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8jb7x" podUID="c7a6edd5-0d0d-431a-9884-af988d7db265" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.885679 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" podUID="e7c8235f-88c7-4d87-b1b5-9514cb07f9cf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 26 08:16:27 crc 
kubenswrapper[4741]: I0226 08:16:27.885808 4741 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-btgjx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.44:5443/healthz\": dial tcp 10.217.0.44:5443: connect: connection refused" start-of-body= Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.885871 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" podUID="ca42622a-7a05-4d6d-a432-389fa771e319" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.44:5443/healthz\": dial tcp 10.217.0.44:5443: connect: connection refused" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.884311 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" event={"ID":"1ceb1ab9-9ce4-4a40-9273-727f0499aa21","Type":"ContainerStarted","Data":"e9e2234cad6411c7b9f2f9889538ac6627a912452bccb81c2dcecb2b1c86ea22"} Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.886556 4741 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-66p6t container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.886600 4741 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5wtbm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.886613 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" 
podUID="ccea6218-4c8e-45dd-890f-5f9fd1806c99" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.886650 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" podUID="15f99982-b491-4a49-8fb9-f6355b956e11" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.903904 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" podStartSLOduration=168.903881199 podStartE2EDuration="2m48.903881199s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.600726711 +0000 UTC m=+222.596664118" watchObservedRunningTime="2026-02-26 08:16:27.903881199 +0000 UTC m=+222.899818586" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.905628 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podStartSLOduration=168.905618939 podStartE2EDuration="2m48.905618939s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:27.902282453 +0000 UTC m=+222.898219840" watchObservedRunningTime="2026-02-26 08:16:27.905618939 +0000 UTC m=+222.901556326" Feb 26 08:16:27 crc kubenswrapper[4741]: I0226 08:16:27.905664 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:27 crc kubenswrapper[4741]: E0226 08:16:27.906393 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:28.406373311 +0000 UTC m=+223.402310698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.006845 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.009426 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:28.509406164 +0000 UTC m=+223.505343541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.110215 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.110634 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:28.610620585 +0000 UTC m=+223.606557972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.160347 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.211630 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.211905 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:28.711858687 +0000 UTC m=+223.707796084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.212300 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.212692 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:28.712674891 +0000 UTC m=+223.708612278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.268200 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 08:16:28 crc kubenswrapper[4741]: [-]has-synced failed: reason withheld Feb 26 08:16:28 crc kubenswrapper[4741]: [+]process-running ok Feb 26 08:16:28 crc kubenswrapper[4741]: healthz check failed Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.268293 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.313518 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.313753 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 08:16:28.813718727 +0000 UTC m=+223.809656104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.313901 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.314337 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:28.814321144 +0000 UTC m=+223.810258531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.409869 4741 ???:1] "http: TLS handshake error from 192.168.126.11:41166: no serving certificate available for the kubelet" Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.415339 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.415574 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:28.915534995 +0000 UTC m=+223.911472382 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.415680 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.416080 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:28.91606937 +0000 UTC m=+223.912006757 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.517158 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.517389 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.017353433 +0000 UTC m=+224.013290820 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.517526 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.517949 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.01794225 +0000 UTC m=+224.013879627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.618600 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.618867 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.118829232 +0000 UTC m=+224.114766619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.619082 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.619468 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.11945231 +0000 UTC m=+224.115389687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.720142 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.720378 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.220344022 +0000 UTC m=+224.216281409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.720450 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.720858 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.220849596 +0000 UTC m=+224.216786983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.821314 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.821575 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.321534352 +0000 UTC m=+224.317471739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.821665 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.822044 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.322028146 +0000 UTC m=+224.317965533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.905862 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" event={"ID":"e2b04bf6-e1f3-48a6-9277-1a220a59ef82","Type":"ContainerStarted","Data":"6166bd04901d576c47350b129ff4b863e0a2e3d344430a2fe387e5437f6a2230"} Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.908809 4741 generic.go:334] "Generic (PLEG): container finished" podID="b4fc717c-df6a-4ba5-a998-6385257e6f7e" containerID="0553f6ac15d04eaa843de47fdb335c1973288689586121cf0ac6c5c6c367b9fa" exitCode=0 Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.910227 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" event={"ID":"b4fc717c-df6a-4ba5-a998-6385257e6f7e","Type":"ContainerDied","Data":"0553f6ac15d04eaa843de47fdb335c1973288689586121cf0ac6c5c6c367b9fa"} Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.911256 4741 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5wtbm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.911322 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" podUID="15f99982-b491-4a49-8fb9-f6355b956e11" containerName="marketplace-operator" 
probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.923128 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.923280 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.423245937 +0000 UTC m=+224.419183324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:28 crc kubenswrapper[4741]: I0226 08:16:28.923443 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:28 crc kubenswrapper[4741]: E0226 08:16:28.923821 4741 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.423806003 +0000 UTC m=+224.419743390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.024609 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.024856 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.524818719 +0000 UTC m=+224.520756106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.025730 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.026475 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.526455956 +0000 UTC m=+224.522393523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.128265 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.128377 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.628358397 +0000 UTC m=+224.624295784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.128728 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.129048 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.629041456 +0000 UTC m=+224.624978843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.230646 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.231087 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.731069941 +0000 UTC m=+224.727007328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.265255 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 08:16:29 crc kubenswrapper[4741]: [-]has-synced failed: reason withheld Feb 26 08:16:29 crc kubenswrapper[4741]: [+]process-running ok Feb 26 08:16:29 crc kubenswrapper[4741]: healthz check failed Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.265321 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.332647 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.333141 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-26 08:16:29.833120096 +0000 UTC m=+224.829057483 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.434027 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.434306 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.934260195 +0000 UTC m=+224.930197592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.434485 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.434967 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:29.934954225 +0000 UTC m=+224.930891612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.535622 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.535878 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:30.035839846 +0000 UTC m=+225.031777233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.536737 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.537152 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:30.037135493 +0000 UTC m=+225.033072880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.638031 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.638536 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:30.138505569 +0000 UTC m=+225.134442956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.739348 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.739914 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:30.239888245 +0000 UTC m=+225.235825822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.771652 4741 ???:1] "http: TLS handshake error from 192.168.126.11:41170: no serving certificate available for the kubelet" Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.800479 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mwjzl"] Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.801974 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.806828 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.822962 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mwjzl"] Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.840261 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.840498 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:30.340455827 +0000 UTC m=+225.336393214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.940049 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" event={"ID":"e2b04bf6-e1f3-48a6-9277-1a220a59ef82","Type":"ContainerStarted","Data":"c5d3ca441cc81bb63872e7cb5171336d57bc7d0b79537fc7889ee3d0244db46a"} Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.942102 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-utilities\") pod \"community-operators-mwjzl\" (UID: \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\") " pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.942201 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-catalog-content\") pod \"community-operators-mwjzl\" (UID: \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\") " pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.942285 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.942412 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thzxz\" (UniqueName: \"kubernetes.io/projected/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-kube-api-access-thzxz\") pod \"community-operators-mwjzl\" (UID: \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\") " pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:16:29 crc kubenswrapper[4741]: E0226 08:16:29.942810 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:30.44278694 +0000 UTC m=+225.438724327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.964972 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rfttb"] Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.976023 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:16:29 crc kubenswrapper[4741]: I0226 08:16:29.985582 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.016262 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rfttb"] Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.046088 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.046424 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-utilities\") pod \"community-operators-mwjzl\" (UID: \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\") " pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.046460 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-catalog-content\") pod \"community-operators-mwjzl\" (UID: \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\") " pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.046504 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tsmp\" (UniqueName: \"kubernetes.io/projected/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-kube-api-access-6tsmp\") pod \"certified-operators-rfttb\" (UID: \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\") " 
pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.046588 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-catalog-content\") pod \"certified-operators-rfttb\" (UID: \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\") " pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.046624 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-utilities\") pod \"certified-operators-rfttb\" (UID: \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\") " pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.046645 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thzxz\" (UniqueName: \"kubernetes.io/projected/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-kube-api-access-thzxz\") pod \"community-operators-mwjzl\" (UID: \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\") " pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:16:30 crc kubenswrapper[4741]: E0226 08:16:30.047614 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:30.547595685 +0000 UTC m=+225.543533072 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.047975 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-utilities\") pod \"community-operators-mwjzl\" (UID: \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\") " pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.048211 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-catalog-content\") pod \"community-operators-mwjzl\" (UID: \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\") " pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.084876 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thzxz\" (UniqueName: \"kubernetes.io/projected/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-kube-api-access-thzxz\") pod \"community-operators-mwjzl\" (UID: \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\") " pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.120531 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.151057 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-catalog-content\") pod \"certified-operators-rfttb\" (UID: \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\") " pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.151142 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-utilities\") pod \"certified-operators-rfttb\" (UID: \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\") " pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.151204 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tsmp\" (UniqueName: \"kubernetes.io/projected/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-kube-api-access-6tsmp\") pod \"certified-operators-rfttb\" (UID: \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\") " pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.151229 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:30 crc kubenswrapper[4741]: E0226 08:16:30.151613 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-26 08:16:30.651597166 +0000 UTC m=+225.647534553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.151751 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-catalog-content\") pod \"certified-operators-rfttb\" (UID: \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\") " pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.152092 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-utilities\") pod \"certified-operators-rfttb\" (UID: \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\") " pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.158826 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.246733 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vrg9r"] Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.248174 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.252750 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.253067 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tsmp\" (UniqueName: \"kubernetes.io/projected/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-kube-api-access-6tsmp\") pod \"certified-operators-rfttb\" (UID: \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\") " pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:16:30 crc kubenswrapper[4741]: E0226 08:16:30.253363 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:30.753339612 +0000 UTC m=+225.749276999 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.257062 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vrg9r"] Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.278470 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 08:16:30 crc kubenswrapper[4741]: [-]has-synced failed: reason withheld Feb 26 08:16:30 crc kubenswrapper[4741]: [+]process-running ok Feb 26 08:16:30 crc kubenswrapper[4741]: healthz check failed Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.278534 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.330840 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.354273 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4617c569-c733-49d0-8d5f-01a69cb53e73-utilities\") pod \"community-operators-vrg9r\" (UID: \"4617c569-c733-49d0-8d5f-01a69cb53e73\") " pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.354347 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4617c569-c733-49d0-8d5f-01a69cb53e73-catalog-content\") pod \"community-operators-vrg9r\" (UID: \"4617c569-c733-49d0-8d5f-01a69cb53e73\") " pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.354388 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j4f8\" (UniqueName: \"kubernetes.io/projected/4617c569-c733-49d0-8d5f-01a69cb53e73-kube-api-access-7j4f8\") pod \"community-operators-vrg9r\" (UID: \"4617c569-c733-49d0-8d5f-01a69cb53e73\") " pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.354432 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:30 crc kubenswrapper[4741]: E0226 08:16:30.354773 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:30.854760509 +0000 UTC m=+225.850697896 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.360257 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lmj66"] Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.361613 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.398135 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmj66"] Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.426988 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.455683 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:30 crc kubenswrapper[4741]: E0226 08:16:30.456210 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:30.956173006 +0000 UTC m=+225.952110393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.456559 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4fc717c-df6a-4ba5-a998-6385257e6f7e-config-volume\") pod \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\" (UID: \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\") " Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.456651 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2tgc\" (UniqueName: \"kubernetes.io/projected/b4fc717c-df6a-4ba5-a998-6385257e6f7e-kube-api-access-r2tgc\") pod 
\"b4fc717c-df6a-4ba5-a998-6385257e6f7e\" (UID: \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\") " Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.458811 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4fc717c-df6a-4ba5-a998-6385257e6f7e-secret-volume\") pod \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\" (UID: \"b4fc717c-df6a-4ba5-a998-6385257e6f7e\") " Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.460596 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4617c569-c733-49d0-8d5f-01a69cb53e73-catalog-content\") pod \"community-operators-vrg9r\" (UID: \"4617c569-c733-49d0-8d5f-01a69cb53e73\") " pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.460732 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf04e22-1cdf-45ca-9a69-110a53166ff6-utilities\") pod \"certified-operators-lmj66\" (UID: \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\") " pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.460844 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j4f8\" (UniqueName: \"kubernetes.io/projected/4617c569-c733-49d0-8d5f-01a69cb53e73-kube-api-access-7j4f8\") pod \"community-operators-vrg9r\" (UID: \"4617c569-c733-49d0-8d5f-01a69cb53e73\") " pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.460938 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttqd7\" (UniqueName: \"kubernetes.io/projected/dbf04e22-1cdf-45ca-9a69-110a53166ff6-kube-api-access-ttqd7\") pod \"certified-operators-lmj66\" (UID: 
\"dbf04e22-1cdf-45ca-9a69-110a53166ff6\") " pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.461033 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.461164 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4617c569-c733-49d0-8d5f-01a69cb53e73-utilities\") pod \"community-operators-vrg9r\" (UID: \"4617c569-c733-49d0-8d5f-01a69cb53e73\") " pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.461293 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf04e22-1cdf-45ca-9a69-110a53166ff6-catalog-content\") pod \"certified-operators-lmj66\" (UID: \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\") " pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.458743 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4fc717c-df6a-4ba5-a998-6385257e6f7e-config-volume" (OuterVolumeSpecName: "config-volume") pod "b4fc717c-df6a-4ba5-a998-6385257e6f7e" (UID: "b4fc717c-df6a-4ba5-a998-6385257e6f7e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.462806 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4617c569-c733-49d0-8d5f-01a69cb53e73-catalog-content\") pod \"community-operators-vrg9r\" (UID: \"4617c569-c733-49d0-8d5f-01a69cb53e73\") " pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:16:30 crc kubenswrapper[4741]: E0226 08:16:30.463265 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:30.963250139 +0000 UTC m=+225.959187526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.463646 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4617c569-c733-49d0-8d5f-01a69cb53e73-utilities\") pod \"community-operators-vrg9r\" (UID: \"4617c569-c733-49d0-8d5f-01a69cb53e73\") " pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.465636 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4fc717c-df6a-4ba5-a998-6385257e6f7e-kube-api-access-r2tgc" (OuterVolumeSpecName: "kube-api-access-r2tgc") pod "b4fc717c-df6a-4ba5-a998-6385257e6f7e" (UID: 
"b4fc717c-df6a-4ba5-a998-6385257e6f7e"). InnerVolumeSpecName "kube-api-access-r2tgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.466000 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4fc717c-df6a-4ba5-a998-6385257e6f7e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b4fc717c-df6a-4ba5-a998-6385257e6f7e" (UID: "b4fc717c-df6a-4ba5-a998-6385257e6f7e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.492620 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j4f8\" (UniqueName: \"kubernetes.io/projected/4617c569-c733-49d0-8d5f-01a69cb53e73-kube-api-access-7j4f8\") pod \"community-operators-vrg9r\" (UID: \"4617c569-c733-49d0-8d5f-01a69cb53e73\") " pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.562665 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.563449 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf04e22-1cdf-45ca-9a69-110a53166ff6-utilities\") pod \"certified-operators-lmj66\" (UID: \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\") " pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.563508 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttqd7\" (UniqueName: 
\"kubernetes.io/projected/dbf04e22-1cdf-45ca-9a69-110a53166ff6-kube-api-access-ttqd7\") pod \"certified-operators-lmj66\" (UID: \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\") " pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.563581 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf04e22-1cdf-45ca-9a69-110a53166ff6-catalog-content\") pod \"certified-operators-lmj66\" (UID: \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\") " pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.563901 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf04e22-1cdf-45ca-9a69-110a53166ff6-utilities\") pod \"certified-operators-lmj66\" (UID: \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\") " pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:16:30 crc kubenswrapper[4741]: E0226 08:16:30.564054 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:31.064033058 +0000 UTC m=+226.059970445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.564317 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf04e22-1cdf-45ca-9a69-110a53166ff6-catalog-content\") pod \"certified-operators-lmj66\" (UID: \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\") " pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.564432 4741 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4fc717c-df6a-4ba5-a998-6385257e6f7e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.564451 4741 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4fc717c-df6a-4ba5-a998-6385257e6f7e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.564463 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2tgc\" (UniqueName: \"kubernetes.io/projected/b4fc717c-df6a-4ba5-a998-6385257e6f7e-kube-api-access-r2tgc\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.596184 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.609145 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttqd7\" (UniqueName: \"kubernetes.io/projected/dbf04e22-1cdf-45ca-9a69-110a53166ff6-kube-api-access-ttqd7\") pod \"certified-operators-lmj66\" (UID: \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\") " pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.672699 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:30 crc kubenswrapper[4741]: E0226 08:16:30.673591 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:31.173576899 +0000 UTC m=+226.169514286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.693946 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.780410 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 08:16:30 crc kubenswrapper[4741]: E0226 08:16:30.782798 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fc717c-df6a-4ba5-a998-6385257e6f7e" containerName="collect-profiles" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.782814 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fc717c-df6a-4ba5-a998-6385257e6f7e" containerName="collect-profiles" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.782937 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4fc717c-df6a-4ba5-a998-6385257e6f7e" containerName="collect-profiles" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.783276 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.783443 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 08:16:30 crc kubenswrapper[4741]: E0226 08:16:30.783946 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:31.283927472 +0000 UTC m=+226.279864859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.786049 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.786371 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.800650 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.886403 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5l64"] Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.887299 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.887402 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: 
\"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.887457 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 08:16:30 crc kubenswrapper[4741]: E0226 08:16:30.887779 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:31.387758369 +0000 UTC m=+226.383695756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.893073 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" podUID="7b281845-065b-47b4-9bd9-2d45ce79b693" containerName="controller-manager" containerID="cri-o://6b570eaac056a87d07ea01e101158d5d29213f710268e0e5a730fe9d74d21b43" gracePeriod=30 Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.922138 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.942806 
4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6"] Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.947483 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" podUID="4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" containerName="route-controller-manager" containerID="cri-o://f66b6f11c5d11eafb417922b2855cb746e6d7e0d30e359e9c330f412e5adbf69" gracePeriod=30 Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.955374 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:30 crc kubenswrapper[4741]: I0226 08:16:30.977792 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" event={"ID":"e2b04bf6-e1f3-48a6-9277-1a220a59ef82","Type":"ContainerStarted","Data":"85a70ea7a6088fda1924939aeb0418c43bd4bd292aa8dadbde022963f416d89c"} Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:30.999900 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:31.499879804 +0000 UTC m=+226.495817191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.002602 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.003476 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.003598 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.003832 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:31.005311 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:31.505289259 +0000 UTC m=+226.501226636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.005916 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.009502 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mwjzl"] Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.012639 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.012956 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh" event={"ID":"b4fc717c-df6a-4ba5-a998-6385257e6f7e","Type":"ContainerDied","Data":"df3474ac4a3000808851018902302900d7372bcfdb4023043e8958fa461c532e"} Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.013061 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df3474ac4a3000808851018902302900d7372bcfdb4023043e8958fa461c532e" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.050710 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.071054 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rfttb"] Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.108191 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:31.108729 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 08:16:31.608693553 +0000 UTC m=+226.604630940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.140661 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.147728 4741 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.155798 4741 patch_prober.go:28] interesting pod/downloads-7954f5f757-ptx5j container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.155840 4741 patch_prober.go:28] interesting pod/downloads-7954f5f757-ptx5j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.155882 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ptx5j" podUID="59420a86-a033-4cbe-98bf-3ec780191ed6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: 
connect: connection refused" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.155897 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ptx5j" podUID="59420a86-a033-4cbe-98bf-3ec780191ed6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 26 08:16:31 crc kubenswrapper[4741]: W0226 08:16:31.190707 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod326d4c0d_4365_4ae3_b9b3_8abf324c80e4.slice/crio-e9802ca3ab40bfc76bc9181ef09beddf252d81f3628597164ef1e10e412f9f99 WatchSource:0}: Error finding container e9802ca3ab40bfc76bc9181ef09beddf252d81f3628597164ef1e10e412f9f99: Status 404 returned error can't find the container with id e9802ca3ab40bfc76bc9181ef09beddf252d81f3628597164ef1e10e412f9f99 Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.210019 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:31.210449 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:31.710434709 +0000 UTC m=+226.706372096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.245194 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vrg9r"] Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.265829 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 08:16:31 crc kubenswrapper[4741]: [-]has-synced failed: reason withheld Feb 26 08:16:31 crc kubenswrapper[4741]: [+]process-running ok Feb 26 08:16:31 crc kubenswrapper[4741]: healthz check failed Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.265906 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 08:16:31 crc kubenswrapper[4741]: W0226 08:16:31.266355 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4617c569_c733_49d0_8d5f_01a69cb53e73.slice/crio-4750e59940f6220645785fff7a8d45248375cb1bb5537eb514c584689a3d8ad7 WatchSource:0}: Error finding container 4750e59940f6220645785fff7a8d45248375cb1bb5537eb514c584689a3d8ad7: Status 404 returned error can't find the container with id 4750e59940f6220645785fff7a8d45248375cb1bb5537eb514c584689a3d8ad7 
Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.294755 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmj66"] Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.311834 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:31.312289 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:31.812255907 +0000 UTC m=+226.808193294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.312404 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:31.312962 4741 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:31.812945137 +0000 UTC m=+226.808882524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: W0226 08:16:31.316898 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf04e22_1cdf_45ca_9a69_110a53166ff6.slice/crio-79954f83fe06cb21986ad6cae864dde51b73880df21677501221426266f4874b WatchSource:0}: Error finding container 79954f83fe06cb21986ad6cae864dde51b73880df21677501221426266f4874b: Status 404 returned error can't find the container with id 79954f83fe06cb21986ad6cae864dde51b73880df21677501221426266f4874b Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.323640 4741 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7qhb6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.324017 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" podUID="4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 
10.217.0.16:8443: connect: connection refused" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.360384 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.360449 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.369058 4741 patch_prober.go:28] interesting pod/apiserver-76f77b778f-nhrbh container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 26 08:16:31 crc kubenswrapper[4741]: [+]log ok Feb 26 08:16:31 crc kubenswrapper[4741]: [+]etcd ok Feb 26 08:16:31 crc kubenswrapper[4741]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 26 08:16:31 crc kubenswrapper[4741]: [+]poststarthook/generic-apiserver-start-informers ok Feb 26 08:16:31 crc kubenswrapper[4741]: [+]poststarthook/max-in-flight-filter ok Feb 26 08:16:31 crc kubenswrapper[4741]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 26 08:16:31 crc kubenswrapper[4741]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 26 08:16:31 crc kubenswrapper[4741]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 26 08:16:31 crc kubenswrapper[4741]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 26 08:16:31 crc kubenswrapper[4741]: [+]poststarthook/project.openshift.io-projectcache ok Feb 26 08:16:31 crc kubenswrapper[4741]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 26 08:16:31 crc kubenswrapper[4741]: [+]poststarthook/openshift.io-startinformers ok Feb 26 08:16:31 crc kubenswrapper[4741]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 26 08:16:31 crc kubenswrapper[4741]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 26 08:16:31 crc kubenswrapper[4741]: livez check failed Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.369145 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" podUID="5fdadf1f-38a7-41a9-ab52-e750457f3e00" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.378289 4741 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-r5l64 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.378347 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" podUID="7b281845-065b-47b4-9bd9-2d45ce79b693" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.414055 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:31.414419 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:31.914328982 +0000 UTC m=+226.910266369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.414779 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:31.415551 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:31.915529427 +0000 UTC m=+226.911466814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.432084 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.432158 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.436048 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.438899 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.459462 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.517667 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:31.520332 4741 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:32.02029167 +0000 UTC m=+227.016229187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.621985 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:31.622786 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:32.122770797 +0000 UTC m=+227.118708184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.723651 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:31.724216 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:32.224194705 +0000 UTC m=+227.220132102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.743160 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bfvsg"] Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.744518 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.746732 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.752034 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfvsg"] Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.825330 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba490082-d248-4d24-86ea-a812f638c6f7-catalog-content\") pod \"redhat-marketplace-bfvsg\" (UID: \"ba490082-d248-4d24-86ea-a812f638c6f7\") " pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.825401 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba490082-d248-4d24-86ea-a812f638c6f7-utilities\") pod \"redhat-marketplace-bfvsg\" (UID: \"ba490082-d248-4d24-86ea-a812f638c6f7\") " pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.825584 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.825643 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8fnx\" (UniqueName: \"kubernetes.io/projected/ba490082-d248-4d24-86ea-a812f638c6f7-kube-api-access-v8fnx\") 
pod \"redhat-marketplace-bfvsg\" (UID: \"ba490082-d248-4d24-86ea-a812f638c6f7\") " pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:31.826027 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:32.326003473 +0000 UTC m=+227.321940850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.856453 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8jb7x" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.907957 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.908391 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.909578 4741 patch_prober.go:28] interesting pod/console-f9d7485db-hdqgn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.909646 4741 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-f9d7485db-hdqgn" podUID="a1087876-b61e-42ed-bd63-0ede0e6a09e3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.928040 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.928809 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba490082-d248-4d24-86ea-a812f638c6f7-catalog-content\") pod \"redhat-marketplace-bfvsg\" (UID: \"ba490082-d248-4d24-86ea-a812f638c6f7\") " pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.928892 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba490082-d248-4d24-86ea-a812f638c6f7-utilities\") pod \"redhat-marketplace-bfvsg\" (UID: \"ba490082-d248-4d24-86ea-a812f638c6f7\") " pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:31.929082 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 08:16:32.429044466 +0000 UTC m=+227.424981853 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.929351 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.929473 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8fnx\" (UniqueName: \"kubernetes.io/projected/ba490082-d248-4d24-86ea-a812f638c6f7-kube-api-access-v8fnx\") pod \"redhat-marketplace-bfvsg\" (UID: \"ba490082-d248-4d24-86ea-a812f638c6f7\") " pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:16:31 crc kubenswrapper[4741]: E0226 08:16:31.930890 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:32.430881059 +0000 UTC m=+227.426818446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.931006 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba490082-d248-4d24-86ea-a812f638c6f7-catalog-content\") pod \"redhat-marketplace-bfvsg\" (UID: \"ba490082-d248-4d24-86ea-a812f638c6f7\") " pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.931081 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba490082-d248-4d24-86ea-a812f638c6f7-utilities\") pod \"redhat-marketplace-bfvsg\" (UID: \"ba490082-d248-4d24-86ea-a812f638c6f7\") " pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.947988 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:31 crc kubenswrapper[4741]: I0226 08:16:31.960014 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8fnx\" (UniqueName: \"kubernetes.io/projected/ba490082-d248-4d24-86ea-a812f638c6f7-kube-api-access-v8fnx\") pod \"redhat-marketplace-bfvsg\" (UID: \"ba490082-d248-4d24-86ea-a812f638c6f7\") " pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.021949 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" event={"ID":"e2b04bf6-e1f3-48a6-9277-1a220a59ef82","Type":"ContainerStarted","Data":"be91b19ef665e16f916ebe52a1a09dd05e5f9c9f82da682a2b9a7b8750ccc319"} Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.024645 4741 generic.go:334] "Generic (PLEG): container finished" podID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" containerID="8b5463a2a56edb88cb747172fa234f861c4bc76c7c450256a92b25dd3b81dfd9" exitCode=0 Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.024728 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rfttb" event={"ID":"769f8af2-a3e7-4d89-a15d-a81b50d12bc4","Type":"ContainerDied","Data":"8b5463a2a56edb88cb747172fa234f861c4bc76c7c450256a92b25dd3b81dfd9"} Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.024758 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rfttb" event={"ID":"769f8af2-a3e7-4d89-a15d-a81b50d12bc4","Type":"ContainerStarted","Data":"6d6879f41b74ebfa46b6cc954494a8f6132196abde1a3e994409a806c9fbe9c5"} Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.030287 4741 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-26T08:16:31.147753407Z","Handler":null,"Name":""} Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.030621 4741 generic.go:334] "Generic (PLEG): container finished" podID="7b281845-065b-47b4-9bd9-2d45ce79b693" containerID="6b570eaac056a87d07ea01e101158d5d29213f710268e0e5a730fe9d74d21b43" exitCode=0 Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.030814 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" event={"ID":"7b281845-065b-47b4-9bd9-2d45ce79b693","Type":"ContainerDied","Data":"6b570eaac056a87d07ea01e101158d5d29213f710268e0e5a730fe9d74d21b43"} Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.030827 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.030855 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5l64" event={"ID":"7b281845-065b-47b4-9bd9-2d45ce79b693","Type":"ContainerDied","Data":"ebe7540007bf601cc51a768134495e1c1417c4f5c8712f2fa0f5cb60b9ff25d8"} Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.030854 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-config\") pod \"7b281845-065b-47b4-9bd9-2d45ce79b693\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.030878 4741 scope.go:117] "RemoveContainer" containerID="6b570eaac056a87d07ea01e101158d5d29213f710268e0e5a730fe9d74d21b43" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.031171 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.031200 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-client-ca\") pod \"7b281845-065b-47b4-9bd9-2d45ce79b693\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.031274 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvn4m\" (UniqueName: \"kubernetes.io/projected/7b281845-065b-47b4-9bd9-2d45ce79b693-kube-api-access-nvn4m\") pod \"7b281845-065b-47b4-9bd9-2d45ce79b693\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.031295 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b281845-065b-47b4-9bd9-2d45ce79b693-serving-cert\") pod \"7b281845-065b-47b4-9bd9-2d45ce79b693\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.031327 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-proxy-ca-bundles\") pod \"7b281845-065b-47b4-9bd9-2d45ce79b693\" (UID: \"7b281845-065b-47b4-9bd9-2d45ce79b693\") " Feb 26 08:16:32 crc kubenswrapper[4741]: E0226 08:16:32.031359 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 08:16:32.531336938 +0000 UTC m=+227.527274335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.031684 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.032190 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-client-ca" (OuterVolumeSpecName: "client-ca") pod "7b281845-065b-47b4-9bd9-2d45ce79b693" (UID: "7b281845-065b-47b4-9bd9-2d45ce79b693"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.032328 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-config" (OuterVolumeSpecName: "config") pod "7b281845-065b-47b4-9bd9-2d45ce79b693" (UID: "7b281845-065b-47b4-9bd9-2d45ce79b693"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.033013 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7b281845-065b-47b4-9bd9-2d45ce79b693" (UID: "7b281845-065b-47b4-9bd9-2d45ce79b693"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:16:32 crc kubenswrapper[4741]: E0226 08:16:32.033153 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 08:16:32.533097769 +0000 UTC m=+227.529035156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bcnnc" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.033218 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3","Type":"ContainerStarted","Data":"0982ac344ecbdb674637d3d3c5858daa2818b711e2038bc6d420a3840bf0b1f1"} Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.036530 4741 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.036585 4741 csi_plugin.go:113] 
kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.036941 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b281845-065b-47b4-9bd9-2d45ce79b693-kube-api-access-nvn4m" (OuterVolumeSpecName: "kube-api-access-nvn4m") pod "7b281845-065b-47b4-9bd9-2d45ce79b693" (UID: "7b281845-065b-47b4-9bd9-2d45ce79b693"). InnerVolumeSpecName "kube-api-access-nvn4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.038171 4741 generic.go:334] "Generic (PLEG): container finished" podID="326d4c0d-4365-4ae3-b9b3-8abf324c80e4" containerID="a809cce12c2092dcea949a57dbf86b577e9e5c3b6538ce949254a97c951b33cf" exitCode=0 Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.038270 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mwjzl" event={"ID":"326d4c0d-4365-4ae3-b9b3-8abf324c80e4","Type":"ContainerDied","Data":"a809cce12c2092dcea949a57dbf86b577e9e5c3b6538ce949254a97c951b33cf"} Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.038315 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mwjzl" event={"ID":"326d4c0d-4365-4ae3-b9b3-8abf324c80e4","Type":"ContainerStarted","Data":"e9802ca3ab40bfc76bc9181ef09beddf252d81f3628597164ef1e10e412f9f99"} Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.041192 4741 generic.go:334] "Generic (PLEG): container finished" podID="4617c569-c733-49d0-8d5f-01a69cb53e73" containerID="8fed31c27410c469f08e21336fafa25b5665c8ed788852f9fb2cce9857cf3a54" exitCode=0 Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.041234 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg9r" 
event={"ID":"4617c569-c733-49d0-8d5f-01a69cb53e73","Type":"ContainerDied","Data":"8fed31c27410c469f08e21336fafa25b5665c8ed788852f9fb2cce9857cf3a54"} Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.041251 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg9r" event={"ID":"4617c569-c733-49d0-8d5f-01a69cb53e73","Type":"ContainerStarted","Data":"4750e59940f6220645785fff7a8d45248375cb1bb5537eb514c584689a3d8ad7"} Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.047348 4741 generic.go:334] "Generic (PLEG): container finished" podID="4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" containerID="f66b6f11c5d11eafb417922b2855cb746e6d7e0d30e359e9c330f412e5adbf69" exitCode=0 Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.047444 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" event={"ID":"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb","Type":"ContainerDied","Data":"f66b6f11c5d11eafb417922b2855cb746e6d7e0d30e359e9c330f412e5adbf69"} Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.051672 4741 generic.go:334] "Generic (PLEG): container finished" podID="dbf04e22-1cdf-45ca-9a69-110a53166ff6" containerID="81769fb405869c84282ff8515aa900c48310c7e106c02e608d95084cf0c572af" exitCode=0 Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.051751 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmj66" event={"ID":"dbf04e22-1cdf-45ca-9a69-110a53166ff6","Type":"ContainerDied","Data":"81769fb405869c84282ff8515aa900c48310c7e106c02e608d95084cf0c572af"} Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.051923 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmj66" event={"ID":"dbf04e22-1cdf-45ca-9a69-110a53166ff6","Type":"ContainerStarted","Data":"79954f83fe06cb21986ad6cae864dde51b73880df21677501221426266f4874b"} Feb 26 08:16:32 crc 
kubenswrapper[4741]: I0226 08:16:32.052331 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b281845-065b-47b4-9bd9-2d45ce79b693-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7b281845-065b-47b4-9bd9-2d45ce79b693" (UID: "7b281845-065b-47b4-9bd9-2d45ce79b693"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.053103 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" podStartSLOduration=13.053087064 podStartE2EDuration="13.053087064s" podCreationTimestamp="2026-02-26 08:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:32.049300465 +0000 UTC m=+227.045237872" watchObservedRunningTime="2026-02-26 08:16:32.053087064 +0000 UTC m=+227.049024461" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.060399 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.063911 4741 scope.go:117] "RemoveContainer" containerID="6b570eaac056a87d07ea01e101158d5d29213f710268e0e5a730fe9d74d21b43" Feb 26 08:16:32 crc kubenswrapper[4741]: E0226 08:16:32.064727 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b570eaac056a87d07ea01e101158d5d29213f710268e0e5a730fe9d74d21b43\": container with ID starting with 6b570eaac056a87d07ea01e101158d5d29213f710268e0e5a730fe9d74d21b43 not found: ID does not exist" containerID="6b570eaac056a87d07ea01e101158d5d29213f710268e0e5a730fe9d74d21b43" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.064761 4741 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6b570eaac056a87d07ea01e101158d5d29213f710268e0e5a730fe9d74d21b43"} err="failed to get container status \"6b570eaac056a87d07ea01e101158d5d29213f710268e0e5a730fe9d74d21b43\": rpc error: code = NotFound desc = could not find container \"6b570eaac056a87d07ea01e101158d5d29213f710268e0e5a730fe9d74d21b43\": container with ID starting with 6b570eaac056a87d07ea01e101158d5d29213f710268e0e5a730fe9d74d21b43 not found: ID does not exist" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.087881 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.113268 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 08:16:32 crc kubenswrapper[4741]: E0226 08:16:32.113650 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b281845-065b-47b4-9bd9-2d45ce79b693" containerName="controller-manager" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.113717 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b281845-065b-47b4-9bd9-2d45ce79b693" containerName="controller-manager" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.113905 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b281845-065b-47b4-9bd9-2d45ce79b693" containerName="controller-manager" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.114377 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.121572 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.121790 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.133022 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.133674 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.133697 4741 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.133708 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvn4m\" (UniqueName: \"kubernetes.io/projected/7b281845-065b-47b4-9bd9-2d45ce79b693-kube-api-access-nvn4m\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.133718 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b281845-065b-47b4-9bd9-2d45ce79b693-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.133729 4741 reconciler_common.go:293] "Volume 
detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b281845-065b-47b4-9bd9-2d45ce79b693-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.141795 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.144179 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.173797 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jrtcx"] Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.176285 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.179174 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrtcx"] Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.235380 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f427872d-44a6-465f-b06a-8289364bab66-utilities\") pod \"redhat-marketplace-jrtcx\" (UID: \"f427872d-44a6-465f-b06a-8289364bab66\") " pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.235523 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js74v\" (UniqueName: \"kubernetes.io/projected/f427872d-44a6-465f-b06a-8289364bab66-kube-api-access-js74v\") pod \"redhat-marketplace-jrtcx\" (UID: \"f427872d-44a6-465f-b06a-8289364bab66\") " pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.235574 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f427872d-44a6-465f-b06a-8289364bab66-catalog-content\") pod \"redhat-marketplace-jrtcx\" (UID: \"f427872d-44a6-465f-b06a-8289364bab66\") " pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.235594 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7f585bc-9e1e-43c7-a566-8e0b04678067-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d7f585bc-9e1e-43c7-a566-8e0b04678067\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.235774 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.235805 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7f585bc-9e1e-43c7-a566-8e0b04678067-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d7f585bc-9e1e-43c7-a566-8e0b04678067\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.242413 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.242995 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.261225 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.265241 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 08:16:32 crc kubenswrapper[4741]: [-]has-synced failed: reason withheld Feb 26 08:16:32 crc kubenswrapper[4741]: [+]process-running ok Feb 26 08:16:32 crc kubenswrapper[4741]: healthz check failed Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.265306 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.282626 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.286031 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bcnnc\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.317460 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.344082 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js74v\" (UniqueName: \"kubernetes.io/projected/f427872d-44a6-465f-b06a-8289364bab66-kube-api-access-js74v\") pod \"redhat-marketplace-jrtcx\" (UID: \"f427872d-44a6-465f-b06a-8289364bab66\") " pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.344193 4741 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f427872d-44a6-465f-b06a-8289364bab66-catalog-content\") pod \"redhat-marketplace-jrtcx\" (UID: \"f427872d-44a6-465f-b06a-8289364bab66\") " pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.344236 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7f585bc-9e1e-43c7-a566-8e0b04678067-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d7f585bc-9e1e-43c7-a566-8e0b04678067\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.344341 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7f585bc-9e1e-43c7-a566-8e0b04678067-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d7f585bc-9e1e-43c7-a566-8e0b04678067\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.344367 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f427872d-44a6-465f-b06a-8289364bab66-utilities\") pod \"redhat-marketplace-jrtcx\" (UID: \"f427872d-44a6-465f-b06a-8289364bab66\") " pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.347207 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f427872d-44a6-465f-b06a-8289364bab66-catalog-content\") pod \"redhat-marketplace-jrtcx\" (UID: \"f427872d-44a6-465f-b06a-8289364bab66\") " pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.347632 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/d7f585bc-9e1e-43c7-a566-8e0b04678067-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d7f585bc-9e1e-43c7-a566-8e0b04678067\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.348411 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f427872d-44a6-465f-b06a-8289364bab66-utilities\") pod \"redhat-marketplace-jrtcx\" (UID: \"f427872d-44a6-465f-b06a-8289364bab66\") " pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.353631 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.389712 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js74v\" (UniqueName: \"kubernetes.io/projected/f427872d-44a6-465f-b06a-8289364bab66-kube-api-access-js74v\") pod \"redhat-marketplace-jrtcx\" (UID: \"f427872d-44a6-465f-b06a-8289364bab66\") " pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.394043 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7f585bc-9e1e-43c7-a566-8e0b04678067-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d7f585bc-9e1e-43c7-a566-8e0b04678067\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.412821 4741 ???:1] "http: TLS handshake error from 192.168.126.11:41184: no serving certificate available for the kubelet" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.421176 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfvsg"] Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.428495 4741 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5l64"] Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.437559 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5l64"] Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.437934 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.437946 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.508220 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.656337 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.656818 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.658007 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.675931 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.699581 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.727797 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.759945 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.760003 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.760044 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.765536 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2840647-3181-4a32-9386-b7f030bb9356-metrics-certs\") pod \"network-metrics-daemon-zlfsg\" (UID: \"f2840647-3181-4a32-9386-b7f030bb9356\") " pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.766474 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") 
pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.769065 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.787759 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.811131 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-957f8cdbd-gv55s"] Feb 26 08:16:32 crc kubenswrapper[4741]: E0226 08:16:32.811762 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" containerName="route-controller-manager" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.811777 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" containerName="route-controller-manager" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.811904 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" containerName="route-controller-manager" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.812442 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.812702 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zlfsg" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.818271 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.818716 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.818873 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.819018 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.819262 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.819399 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.830818 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.838684 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-957f8cdbd-gv55s"] Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.860749 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-config\") pod \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.860832 4741 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-client-ca\") pod \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.860906 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdfnl\" (UniqueName: \"kubernetes.io/projected/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-kube-api-access-gdfnl\") pod \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.861004 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-serving-cert\") pod \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\" (UID: \"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb\") " Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.861176 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-config\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.861226 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-client-ca\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.861275 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-serving-cert\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.861300 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j79h4\" (UniqueName: \"kubernetes.io/projected/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-kube-api-access-j79h4\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.861317 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-proxy-ca-bundles\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.865521 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-client-ca" (OuterVolumeSpecName: "client-ca") pod "4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" (UID: "4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.865595 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-config" (OuterVolumeSpecName: "config") pod "4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" (UID: "4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.873621 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-kube-api-access-gdfnl" (OuterVolumeSpecName: "kube-api-access-gdfnl") pod "4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" (UID: "4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb"). InnerVolumeSpecName "kube-api-access-gdfnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.874584 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" (UID: "4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.883566 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bcnnc"] Feb 26 08:16:32 crc kubenswrapper[4741]: W0226 08:16:32.897427 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee4578f5_1608_403f_9132_5613a1b3a105.slice/crio-f94cf85e3fd8032392a27f80d855ca410562aa5b133514849454f957cc185ebe WatchSource:0}: Error finding container f94cf85e3fd8032392a27f80d855ca410562aa5b133514849454f957cc185ebe: Status 404 returned error can't find the container with id f94cf85e3fd8032392a27f80d855ca410562aa5b133514849454f957cc185ebe Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.945521 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrtcx"] Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.963183 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-client-ca\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.963289 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-serving-cert\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.963327 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j79h4\" (UniqueName: \"kubernetes.io/projected/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-kube-api-access-j79h4\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.963353 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-proxy-ca-bundles\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.963391 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-config\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.963461 4741 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.963479 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdfnl\" (UniqueName: \"kubernetes.io/projected/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-kube-api-access-gdfnl\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.963494 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.963508 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.965666 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-config\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.966544 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-client-ca\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.967151 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-proxy-ca-bundles\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.972976 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-serving-cert\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:32 crc kubenswrapper[4741]: I0226 08:16:32.981521 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j79h4\" (UniqueName: \"kubernetes.io/projected/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-kube-api-access-j79h4\") pod \"controller-manager-957f8cdbd-gv55s\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.005028 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.012418 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:16:33 crc kubenswrapper[4741]: W0226 08:16:33.032350 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd7f585bc_9e1e_43c7_a566_8e0b04678067.slice/crio-e8e037eb0927c4fb6c564aca818d00b02f159e8f806e67ae6d42ddee6e9fc44d WatchSource:0}: Error finding container e8e037eb0927c4fb6c564aca818d00b02f159e8f806e67ae6d42ddee6e9fc44d: Status 404 returned error can't find the container with id e8e037eb0927c4fb6c564aca818d00b02f159e8f806e67ae6d42ddee6e9fc44d Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.037905 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.072194 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3","Type":"ContainerStarted","Data":"90936f905d2ebefcfb4f034a7ecf878b57a2cc302a42c6ffc76cf0745d8a7d39"} Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.075414 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" event={"ID":"ee4578f5-1608-403f-9132-5613a1b3a105","Type":"ContainerStarted","Data":"f94cf85e3fd8032392a27f80d855ca410562aa5b133514849454f957cc185ebe"} Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.088883 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrtcx" event={"ID":"f427872d-44a6-465f-b06a-8289364bab66","Type":"ContainerStarted","Data":"4ecfe0f4a1f42c48253988246ec5563f6a2d3b344097f883e59f4650438b2bf7"} Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.091262 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
podStartSLOduration=3.091238442 podStartE2EDuration="3.091238442s" podCreationTimestamp="2026-02-26 08:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:33.088211975 +0000 UTC m=+228.084149372" watchObservedRunningTime="2026-02-26 08:16:33.091238442 +0000 UTC m=+228.087175819" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.094160 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" event={"ID":"4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb","Type":"ContainerDied","Data":"b69a4adb3e28639a7d1c455b0cc2003d5762cde64089f8d50f6d32beb6f52a3c"} Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.094255 4741 scope.go:117] "RemoveContainer" containerID="f66b6f11c5d11eafb417922b2855cb746e6d7e0d30e359e9c330f412e5adbf69" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.094195 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.096012 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d7f585bc-9e1e-43c7-a566-8e0b04678067","Type":"ContainerStarted","Data":"e8e037eb0927c4fb6c564aca818d00b02f159e8f806e67ae6d42ddee6e9fc44d"} Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.096577 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zlfsg"] Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.097219 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfvsg" event={"ID":"ba490082-d248-4d24-86ea-a812f638c6f7","Type":"ContainerStarted","Data":"34715091292a6bfc5e94a8e78ee62bfd289d837d1a892b1063f3dde6cb0b9ec5"} Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.158160 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.196466 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6"] Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.202883 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qhb6"] Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.266539 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 08:16:33 crc kubenswrapper[4741]: [-]has-synced failed: reason withheld Feb 26 08:16:33 crc kubenswrapper[4741]: [+]process-running ok Feb 26 08:16:33 crc kubenswrapper[4741]: healthz check failed Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.266636 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.350234 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kss68"] Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.361190 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.363102 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kss68"] Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.368426 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.484820 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-catalog-content\") pod \"redhat-operators-kss68\" (UID: \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\") " pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.484907 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-utilities\") pod \"redhat-operators-kss68\" (UID: \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\") " pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.485160 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5slrm\" (UniqueName: \"kubernetes.io/projected/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-kube-api-access-5slrm\") pod \"redhat-operators-kss68\" (UID: \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\") " pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.587533 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-catalog-content\") pod \"redhat-operators-kss68\" (UID: 
\"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\") " pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.587607 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-utilities\") pod \"redhat-operators-kss68\" (UID: \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\") " pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.587645 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5slrm\" (UniqueName: \"kubernetes.io/projected/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-kube-api-access-5slrm\") pod \"redhat-operators-kss68\" (UID: \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\") " pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.588920 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-utilities\") pod \"redhat-operators-kss68\" (UID: \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\") " pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.589056 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-catalog-content\") pod \"redhat-operators-kss68\" (UID: \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\") " pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.611041 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5slrm\" (UniqueName: \"kubernetes.io/projected/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-kube-api-access-5slrm\") pod \"redhat-operators-kss68\" (UID: \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\") " 
pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.648090 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-957f8cdbd-gv55s"] Feb 26 08:16:33 crc kubenswrapper[4741]: W0226 08:16:33.672036 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3fd9e68_9a24_4aca_b42b_7f4abaebf1f3.slice/crio-16c47366a65eabbd327ddc024efa57c4ebfab14b1dd3504e961814f28b654432 WatchSource:0}: Error finding container 16c47366a65eabbd327ddc024efa57c4ebfab14b1dd3504e961814f28b654432: Status 404 returned error can't find the container with id 16c47366a65eabbd327ddc024efa57c4ebfab14b1dd3504e961814f28b654432 Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.690476 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.751033 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j9v25"] Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.752506 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.764517 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9v25"] Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.791909 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxvj9\" (UniqueName: \"kubernetes.io/projected/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-kube-api-access-hxvj9\") pod \"redhat-operators-j9v25\" (UID: \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\") " pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.792503 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-catalog-content\") pod \"redhat-operators-j9v25\" (UID: \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\") " pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.792532 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-utilities\") pod \"redhat-operators-j9v25\" (UID: \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\") " pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.808274 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb" path="/var/lib/kubelet/pods/4a20ea7d-6219-48eb-9665-8f9dd5f2b4cb/volumes" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.809436 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b281845-065b-47b4-9bd9-2d45ce79b693" path="/var/lib/kubelet/pods/7b281845-065b-47b4-9bd9-2d45ce79b693/volumes" Feb 26 08:16:33 crc 
kubenswrapper[4741]: I0226 08:16:33.810446 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.894307 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxvj9\" (UniqueName: \"kubernetes.io/projected/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-kube-api-access-hxvj9\") pod \"redhat-operators-j9v25\" (UID: \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\") " pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.894388 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-catalog-content\") pod \"redhat-operators-j9v25\" (UID: \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\") " pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.894436 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-utilities\") pod \"redhat-operators-j9v25\" (UID: \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\") " pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.895773 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-utilities\") pod \"redhat-operators-j9v25\" (UID: \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\") " pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.896076 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-catalog-content\") pod \"redhat-operators-j9v25\" (UID: \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\") " pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.918356 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxvj9\" (UniqueName: \"kubernetes.io/projected/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-kube-api-access-hxvj9\") pod \"redhat-operators-j9v25\" (UID: \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\") " pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:16:33 crc kubenswrapper[4741]: I0226 08:16:33.994752 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kss68"] Feb 26 08:16:34 crc kubenswrapper[4741]: W0226 08:16:34.039238 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9461f6e_32f2_46cd_b0be_71ae66fdb20e.slice/crio-be599bc485a953ece114b46b759f43a399b20f004d3f76438dafd1500b7c2a49 WatchSource:0}: Error finding container be599bc485a953ece114b46b759f43a399b20f004d3f76438dafd1500b7c2a49: Status 404 returned error can't find the container with id be599bc485a953ece114b46b759f43a399b20f004d3f76438dafd1500b7c2a49 Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.055281 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.136369 4741 generic.go:334] "Generic (PLEG): container finished" podID="ba490082-d248-4d24-86ea-a812f638c6f7" containerID="8c7f3e565d8804b9db94bf8135c6ee566f866ab985f9338ba33a954f970a0d07" exitCode=0 Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.136593 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfvsg" event={"ID":"ba490082-d248-4d24-86ea-a812f638c6f7","Type":"ContainerDied","Data":"8c7f3e565d8804b9db94bf8135c6ee566f866ab985f9338ba33a954f970a0d07"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.152803 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" event={"ID":"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3","Type":"ContainerStarted","Data":"16c47366a65eabbd327ddc024efa57c4ebfab14b1dd3504e961814f28b654432"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.160222 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a781278b30ba87dee31dfdbbf6c1e44c08a5e16af4614734b4f222f785434f6e"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.160268 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f9eb529b595d5706945db20ae4c506a1286392daa7cd61b7ebe056738254efd9"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.171582 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kss68" 
event={"ID":"d9461f6e-32f2-46cd-b0be-71ae66fdb20e","Type":"ContainerStarted","Data":"be599bc485a953ece114b46b759f43a399b20f004d3f76438dafd1500b7c2a49"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.174482 4741 generic.go:334] "Generic (PLEG): container finished" podID="b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3" containerID="90936f905d2ebefcfb4f034a7ecf878b57a2cc302a42c6ffc76cf0745d8a7d39" exitCode=0 Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.174593 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3","Type":"ContainerDied","Data":"90936f905d2ebefcfb4f034a7ecf878b57a2cc302a42c6ffc76cf0745d8a7d39"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.190040 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" event={"ID":"f2840647-3181-4a32-9386-b7f030bb9356","Type":"ContainerStarted","Data":"18b50a679598347a3039217573f3dc5a9e344fb2ee3300026a28da31909a8311"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.212358 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d7f585bc-9e1e-43c7-a566-8e0b04678067","Type":"ContainerStarted","Data":"99a3307f8039e79b31c2c1a02ec07cdecfe4107d0fb868369a4faef482ffc150"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.215138 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c70efe8d7ec3266d40c57472a69f7f006e802bdf245c9a97de79caf9b8c8d237"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.215174 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"60a422c9be79f1a15c152494bdea2de7af10979491782312b92bf8c4654f9cf5"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.215389 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.218040 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"246b368622c65c6f911374f5c01a8fc24cfd71784df52b197d158f8cf037b1f7"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.218070 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5d0f1f69c8765f066da1d1feb16cbb843057b776ed97527dfe0e5c5a98884a96"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.232539 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.232512906 podStartE2EDuration="2.232512906s" podCreationTimestamp="2026-02-26 08:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:34.230782187 +0000 UTC m=+229.226719574" watchObservedRunningTime="2026-02-26 08:16:34.232512906 +0000 UTC m=+229.228450293" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.242434 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" event={"ID":"ee4578f5-1608-403f-9132-5613a1b3a105","Type":"ContainerStarted","Data":"ea2448748a704a0a54d6bd5f21098d4549edf690e27449cf060f9c2dd3f0f6aa"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 
08:16:34.242505 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.249265 4741 generic.go:334] "Generic (PLEG): container finished" podID="f427872d-44a6-465f-b06a-8289364bab66" containerID="41ebb1d7e1e1bfb928fa8211d58963521472ecd8e059bc121faff63e458ee78d" exitCode=0 Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.249323 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrtcx" event={"ID":"f427872d-44a6-465f-b06a-8289364bab66","Type":"ContainerDied","Data":"41ebb1d7e1e1bfb928fa8211d58963521472ecd8e059bc121faff63e458ee78d"} Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.279799 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 08:16:34 crc kubenswrapper[4741]: [-]has-synced failed: reason withheld Feb 26 08:16:34 crc kubenswrapper[4741]: [+]process-running ok Feb 26 08:16:34 crc kubenswrapper[4741]: healthz check failed Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.279865 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.368244 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" podStartSLOduration=175.368218319 podStartE2EDuration="2m55.368218319s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-26 08:16:34.340481922 +0000 UTC m=+229.336419319" watchObservedRunningTime="2026-02-26 08:16:34.368218319 +0000 UTC m=+229.364155706" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.581242 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9v25"] Feb 26 08:16:34 crc kubenswrapper[4741]: W0226 08:16:34.622752 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f0ad83e_7982_442d_93a3_8ba01a4e8ec3.slice/crio-9bde366975824b1a1b82316c6f3c376a930f65fffd40da6c2179f591e5baaab5 WatchSource:0}: Error finding container 9bde366975824b1a1b82316c6f3c376a930f65fffd40da6c2179f591e5baaab5: Status 404 returned error can't find the container with id 9bde366975824b1a1b82316c6f3c376a930f65fffd40da6c2179f591e5baaab5 Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.850612 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng"] Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.877515 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng"] Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.877847 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.910292 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.910950 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.911143 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.912030 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.912825 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.921532 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.934496 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf840543-6ba0-437e-8b2d-aa7089201072-serving-cert\") pod \"route-controller-manager-679c9d9ccb-6l8ng\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.934587 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5m5h\" (UniqueName: \"kubernetes.io/projected/cf840543-6ba0-437e-8b2d-aa7089201072-kube-api-access-w5m5h\") pod 
\"route-controller-manager-679c9d9ccb-6l8ng\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.934741 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf840543-6ba0-437e-8b2d-aa7089201072-client-ca\") pod \"route-controller-manager-679c9d9ccb-6l8ng\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:34 crc kubenswrapper[4741]: I0226 08:16:34.934784 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf840543-6ba0-437e-8b2d-aa7089201072-config\") pod \"route-controller-manager-679c9d9ccb-6l8ng\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.036582 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf840543-6ba0-437e-8b2d-aa7089201072-client-ca\") pod \"route-controller-manager-679c9d9ccb-6l8ng\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.036672 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf840543-6ba0-437e-8b2d-aa7089201072-config\") pod \"route-controller-manager-679c9d9ccb-6l8ng\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.036721 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf840543-6ba0-437e-8b2d-aa7089201072-serving-cert\") pod \"route-controller-manager-679c9d9ccb-6l8ng\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.036744 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5m5h\" (UniqueName: \"kubernetes.io/projected/cf840543-6ba0-437e-8b2d-aa7089201072-kube-api-access-w5m5h\") pod \"route-controller-manager-679c9d9ccb-6l8ng\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.037529 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf840543-6ba0-437e-8b2d-aa7089201072-client-ca\") pod \"route-controller-manager-679c9d9ccb-6l8ng\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.037977 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf840543-6ba0-437e-8b2d-aa7089201072-config\") pod \"route-controller-manager-679c9d9ccb-6l8ng\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.046921 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf840543-6ba0-437e-8b2d-aa7089201072-serving-cert\") pod \"route-controller-manager-679c9d9ccb-6l8ng\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " 
pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.064030 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5m5h\" (UniqueName: \"kubernetes.io/projected/cf840543-6ba0-437e-8b2d-aa7089201072-kube-api-access-w5m5h\") pod \"route-controller-manager-679c9d9ccb-6l8ng\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.259194 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.279245 4741 generic.go:334] "Generic (PLEG): container finished" podID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" containerID="63782ae71a4a9f6f0cdec535aa13cfb1240a20f3c4bdb1b680b1e101ff9f4233" exitCode=0 Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.279320 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9v25" event={"ID":"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3","Type":"ContainerDied","Data":"63782ae71a4a9f6f0cdec535aa13cfb1240a20f3c4bdb1b680b1e101ff9f4233"} Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.279351 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9v25" event={"ID":"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3","Type":"ContainerStarted","Data":"9bde366975824b1a1b82316c6f3c376a930f65fffd40da6c2179f591e5baaab5"} Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.279882 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 08:16:35 crc kubenswrapper[4741]: 
[-]has-synced failed: reason withheld Feb 26 08:16:35 crc kubenswrapper[4741]: [+]process-running ok Feb 26 08:16:35 crc kubenswrapper[4741]: healthz check failed Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.280013 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.294977 4741 generic.go:334] "Generic (PLEG): container finished" podID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" containerID="c713ba46e508de5fd6bfaa936a4af0165a474a327e67d1371fc166547e09ae30" exitCode=0 Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.295081 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kss68" event={"ID":"d9461f6e-32f2-46cd-b0be-71ae66fdb20e","Type":"ContainerDied","Data":"c713ba46e508de5fd6bfaa936a4af0165a474a327e67d1371fc166547e09ae30"} Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.333030 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" event={"ID":"f2840647-3181-4a32-9386-b7f030bb9356","Type":"ContainerStarted","Data":"dd5993da72c405d2e2234e6c9dbed8b045500d26a266d534eede8a55f8b5a0f3"} Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.333081 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zlfsg" event={"ID":"f2840647-3181-4a32-9386-b7f030bb9356","Type":"ContainerStarted","Data":"977ac8e92b12cc4ccfb623a0bfb6c1079c2a87519169882cf1461f1dfddd6ead"} Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.354191 4741 generic.go:334] "Generic (PLEG): container finished" podID="d7f585bc-9e1e-43c7-a566-8e0b04678067" containerID="99a3307f8039e79b31c2c1a02ec07cdecfe4107d0fb868369a4faef482ffc150" exitCode=0 Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 
08:16:35.354408 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d7f585bc-9e1e-43c7-a566-8e0b04678067","Type":"ContainerDied","Data":"99a3307f8039e79b31c2c1a02ec07cdecfe4107d0fb868369a4faef482ffc150"} Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.357463 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" event={"ID":"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3","Type":"ContainerStarted","Data":"51a51b9e256a5d360b7379be7c33ae3cdef0acd22bc251b2acdbe66d733a4263"} Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.357494 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.372962 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.379246 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-zlfsg" podStartSLOduration=176.379217466 podStartE2EDuration="2m56.379217466s" podCreationTimestamp="2026-02-26 08:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:35.370733782 +0000 UTC m=+230.366671169" watchObservedRunningTime="2026-02-26 08:16:35.379217466 +0000 UTC m=+230.375154843" Feb 26 08:16:35 crc kubenswrapper[4741]: I0226 08:16:35.414778 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" podStartSLOduration=4.414736897 podStartE2EDuration="4.414736897s" podCreationTimestamp="2026-02-26 08:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:16:35.409229469 +0000 UTC m=+230.405166866" watchObservedRunningTime="2026-02-26 08:16:35.414736897 +0000 UTC m=+230.410674284" Feb 26 08:16:36 crc kubenswrapper[4741]: I0226 08:16:36.265779 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 08:16:36 crc kubenswrapper[4741]: [-]has-synced failed: reason withheld Feb 26 08:16:36 crc kubenswrapper[4741]: [+]process-running ok Feb 26 08:16:36 crc kubenswrapper[4741]: healthz check failed Feb 26 08:16:36 crc kubenswrapper[4741]: I0226 08:16:36.265913 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 08:16:36 crc kubenswrapper[4741]: I0226 08:16:36.365687 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:36 crc kubenswrapper[4741]: I0226 08:16:36.372256 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-nhrbh" Feb 26 08:16:37 crc kubenswrapper[4741]: I0226 08:16:37.264777 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 08:16:37 crc kubenswrapper[4741]: [-]has-synced failed: reason withheld Feb 26 08:16:37 crc kubenswrapper[4741]: [+]process-running ok Feb 26 08:16:37 crc kubenswrapper[4741]: healthz check failed Feb 26 08:16:37 crc kubenswrapper[4741]: I0226 
08:16:37.265305 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 08:16:37 crc kubenswrapper[4741]: I0226 08:16:37.561853 4741 ???:1] "http: TLS handshake error from 192.168.126.11:43786: no serving certificate available for the kubelet" Feb 26 08:16:37 crc kubenswrapper[4741]: I0226 08:16:37.749587 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-85c82" Feb 26 08:16:37 crc kubenswrapper[4741]: I0226 08:16:37.996526 4741 ???:1] "http: TLS handshake error from 192.168.126.11:43788: no serving certificate available for the kubelet" Feb 26 08:16:38 crc kubenswrapper[4741]: I0226 08:16:38.264514 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:38 crc kubenswrapper[4741]: I0226 08:16:38.267803 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nrk4h" Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.155763 4741 patch_prober.go:28] interesting pod/downloads-7954f5f757-ptx5j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.155833 4741 patch_prober.go:28] interesting pod/downloads-7954f5f757-ptx5j container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.155854 4741 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-ptx5j" podUID="59420a86-a033-4cbe-98bf-3ec780191ed6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.155907 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ptx5j" podUID="59420a86-a033-4cbe-98bf-3ec780191ed6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.340208 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.420102 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3","Type":"ContainerDied","Data":"0982ac344ecbdb674637d3d3c5858daa2818b711e2038bc6d420a3840bf0b1f1"} Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.420163 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0982ac344ecbdb674637d3d3c5858daa2818b711e2038bc6d420a3840bf0b1f1" Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.420206 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.488187 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3-kube-api-access\") pod \"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3\" (UID: \"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3\") " Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.488235 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3-kubelet-dir\") pod \"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3\" (UID: \"b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3\") " Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.488477 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3" (UID: "b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.507667 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3" (UID: "b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.589789 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.589826 4741 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.915760 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:41 crc kubenswrapper[4741]: I0226 08:16:41.919517 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:16:47 crc kubenswrapper[4741]: I0226 08:16:47.828633 4741 ???:1] "http: TLS handshake error from 192.168.126.11:40424: no serving certificate available for the kubelet" Feb 26 08:16:48 crc kubenswrapper[4741]: I0226 08:16:48.175773 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 08:16:48 crc kubenswrapper[4741]: I0226 08:16:48.324203 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7f585bc-9e1e-43c7-a566-8e0b04678067-kube-api-access\") pod \"d7f585bc-9e1e-43c7-a566-8e0b04678067\" (UID: \"d7f585bc-9e1e-43c7-a566-8e0b04678067\") " Feb 26 08:16:48 crc kubenswrapper[4741]: I0226 08:16:48.324289 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7f585bc-9e1e-43c7-a566-8e0b04678067-kubelet-dir\") pod \"d7f585bc-9e1e-43c7-a566-8e0b04678067\" (UID: \"d7f585bc-9e1e-43c7-a566-8e0b04678067\") " Feb 26 08:16:48 crc kubenswrapper[4741]: I0226 08:16:48.324641 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7f585bc-9e1e-43c7-a566-8e0b04678067-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d7f585bc-9e1e-43c7-a566-8e0b04678067" (UID: "d7f585bc-9e1e-43c7-a566-8e0b04678067"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:16:48 crc kubenswrapper[4741]: I0226 08:16:48.332478 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f585bc-9e1e-43c7-a566-8e0b04678067-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7f585bc-9e1e-43c7-a566-8e0b04678067" (UID: "d7f585bc-9e1e-43c7-a566-8e0b04678067"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:16:48 crc kubenswrapper[4741]: I0226 08:16:48.426661 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7f585bc-9e1e-43c7-a566-8e0b04678067-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:48 crc kubenswrapper[4741]: I0226 08:16:48.426702 4741 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7f585bc-9e1e-43c7-a566-8e0b04678067-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 08:16:48 crc kubenswrapper[4741]: I0226 08:16:48.475417 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d7f585bc-9e1e-43c7-a566-8e0b04678067","Type":"ContainerDied","Data":"e8e037eb0927c4fb6c564aca818d00b02f159e8f806e67ae6d42ddee6e9fc44d"} Feb 26 08:16:48 crc kubenswrapper[4741]: I0226 08:16:48.475475 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e037eb0927c4fb6c564aca818d00b02f159e8f806e67ae6d42ddee6e9fc44d" Feb 26 08:16:48 crc kubenswrapper[4741]: I0226 08:16:48.475547 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 08:16:50 crc kubenswrapper[4741]: I0226 08:16:50.172024 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-957f8cdbd-gv55s"] Feb 26 08:16:50 crc kubenswrapper[4741]: I0226 08:16:50.172827 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" podUID="b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3" containerName="controller-manager" containerID="cri-o://51a51b9e256a5d360b7379be7c33ae3cdef0acd22bc251b2acdbe66d733a4263" gracePeriod=30 Feb 26 08:16:50 crc kubenswrapper[4741]: I0226 08:16:50.187674 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng"] Feb 26 08:16:51 crc kubenswrapper[4741]: I0226 08:16:51.176727 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ptx5j" Feb 26 08:16:51 crc kubenswrapper[4741]: I0226 08:16:51.500561 4741 generic.go:334] "Generic (PLEG): container finished" podID="b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3" containerID="51a51b9e256a5d360b7379be7c33ae3cdef0acd22bc251b2acdbe66d733a4263" exitCode=0 Feb 26 08:16:51 crc kubenswrapper[4741]: I0226 08:16:51.500604 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" event={"ID":"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3","Type":"ContainerDied","Data":"51a51b9e256a5d360b7379be7c33ae3cdef0acd22bc251b2acdbe66d733a4263"} Feb 26 08:16:52 crc kubenswrapper[4741]: I0226 08:16:52.445473 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:16:53 crc kubenswrapper[4741]: I0226 08:16:53.159796 4741 patch_prober.go:28] interesting pod/controller-manager-957f8cdbd-gv55s 
container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" start-of-body= Feb 26 08:16:53 crc kubenswrapper[4741]: I0226 08:16:53.160350 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" podUID="b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" Feb 26 08:16:53 crc kubenswrapper[4741]: E0226 08:16:53.531462 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 26 08:16:53 crc kubenswrapper[4741]: E0226 08:16:53.532098 4741 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 08:16:53 crc kubenswrapper[4741]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 26 08:16:53 crc kubenswrapper[4741]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d4rxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29534896-rcrbz_openshift-infra(565843a6-5907-4445-9686-cb92b1a56bec): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 26 08:16:53 crc kubenswrapper[4741]: > logger="UnhandledError" Feb 26 08:16:53 crc kubenswrapper[4741]: E0226 08:16:53.534260 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29534896-rcrbz" podUID="565843a6-5907-4445-9686-cb92b1a56bec" Feb 26 08:16:54 crc kubenswrapper[4741]: E0226 08:16:54.532969 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29534896-rcrbz" podUID="565843a6-5907-4445-9686-cb92b1a56bec" Feb 26 08:16:55 crc kubenswrapper[4741]: I0226 08:16:55.149498 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:16:55 crc kubenswrapper[4741]: I0226 08:16:55.149584 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.754899 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.805889 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-586cb89fb6-q9s46"] Feb 26 08:16:59 crc kubenswrapper[4741]: E0226 08:16:59.806202 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3" containerName="controller-manager" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.806219 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3" containerName="controller-manager" Feb 26 08:16:59 crc kubenswrapper[4741]: E0226 08:16:59.806238 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f585bc-9e1e-43c7-a566-8e0b04678067" containerName="pruner" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.806247 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f585bc-9e1e-43c7-a566-8e0b04678067" containerName="pruner" Feb 26 08:16:59 crc kubenswrapper[4741]: E0226 08:16:59.806262 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3" containerName="pruner" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.806269 4741 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3" containerName="pruner" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.806455 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3" containerName="controller-manager" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.806469 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5c0b1d2-52a0-4d0c-9672-eb86f8ce5ca3" containerName="pruner" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.806477 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f585bc-9e1e-43c7-a566-8e0b04678067" containerName="pruner" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.807003 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.830817 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-586cb89fb6-q9s46"] Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.932732 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-client-ca\") pod \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.932781 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-proxy-ca-bundles\") pod \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.932832 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-serving-cert\") pod \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.932874 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j79h4\" (UniqueName: \"kubernetes.io/projected/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-kube-api-access-j79h4\") pod \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.932939 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-config\") pod \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\" (UID: \"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3\") " Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.933176 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-client-ca\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.933210 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f44549-0735-41a4-ac23-2b6352c69b41-serving-cert\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.933235 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5l4n\" (UniqueName: 
\"kubernetes.io/projected/70f44549-0735-41a4-ac23-2b6352c69b41-kube-api-access-m5l4n\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.933260 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-proxy-ca-bundles\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.933294 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-config\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.934194 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-client-ca" (OuterVolumeSpecName: "client-ca") pod "b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3" (UID: "b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.934150 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3" (UID: "b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.934623 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-config" (OuterVolumeSpecName: "config") pod "b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3" (UID: "b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.941509 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3" (UID: "b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.942878 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-kube-api-access-j79h4" (OuterVolumeSpecName: "kube-api-access-j79h4") pod "b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3" (UID: "b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3"). InnerVolumeSpecName "kube-api-access-j79h4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:16:59 crc kubenswrapper[4741]: I0226 08:16:59.965347 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng"] Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.034094 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-client-ca\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.034197 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f44549-0735-41a4-ac23-2b6352c69b41-serving-cert\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.034225 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5l4n\" (UniqueName: \"kubernetes.io/projected/70f44549-0735-41a4-ac23-2b6352c69b41-kube-api-access-m5l4n\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.034247 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-proxy-ca-bundles\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 
08:17:00.034270 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-config\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.034334 4741 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.034346 4741 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.034358 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.034367 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j79h4\" (UniqueName: \"kubernetes.io/projected/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-kube-api-access-j79h4\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.034376 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.035612 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-client-ca\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: 
\"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.035935 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-config\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.036128 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-proxy-ca-bundles\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.048437 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f44549-0735-41a4-ac23-2b6352c69b41-serving-cert\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.062079 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5l4n\" (UniqueName: \"kubernetes.io/projected/70f44549-0735-41a4-ac23-2b6352c69b41-kube-api-access-m5l4n\") pod \"controller-manager-586cb89fb6-q9s46\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.131373 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.605944 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" event={"ID":"b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3","Type":"ContainerDied","Data":"16c47366a65eabbd327ddc024efa57c4ebfab14b1dd3504e961814f28b654432"} Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.606623 4741 scope.go:117] "RemoveContainer" containerID="51a51b9e256a5d360b7379be7c33ae3cdef0acd22bc251b2acdbe66d733a4263" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.607102 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-957f8cdbd-gv55s" Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.645503 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-957f8cdbd-gv55s"] Feb 26 08:17:00 crc kubenswrapper[4741]: I0226 08:17:00.650204 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-957f8cdbd-gv55s"] Feb 26 08:17:01 crc kubenswrapper[4741]: I0226 08:17:01.794779 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3" path="/var/lib/kubelet/pods/b3fd9e68-9a24-4aca-b42b-7f4abaebf1f3/volumes" Feb 26 08:17:02 crc kubenswrapper[4741]: I0226 08:17:02.357191 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" Feb 26 08:17:03 crc kubenswrapper[4741]: I0226 08:17:03.347433 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 08:17:04 crc kubenswrapper[4741]: E0226 08:17:04.644198 4741 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 08:17:04 crc kubenswrapper[4741]: E0226 08:17:04.644954 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-thzxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mwjzl_openshift-marketplace(326d4c0d-4365-4ae3-b9b3-8abf324c80e4): ErrImagePull: rpc error: code = Canceled desc 
= copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 08:17:04 crc kubenswrapper[4741]: E0226 08:17:04.646188 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mwjzl" podUID="326d4c0d-4365-4ae3-b9b3-8abf324c80e4" Feb 26 08:17:06 crc kubenswrapper[4741]: I0226 08:17:06.113016 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 08:17:06 crc kubenswrapper[4741]: I0226 08:17:06.114563 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 08:17:06 crc kubenswrapper[4741]: I0226 08:17:06.118992 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 08:17:06 crc kubenswrapper[4741]: I0226 08:17:06.119703 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 26 08:17:06 crc kubenswrapper[4741]: I0226 08:17:06.119941 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 26 08:17:06 crc kubenswrapper[4741]: I0226 08:17:06.254952 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/888ba1ec-7933-4127-a867-b5b1d7423f54-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"888ba1ec-7933-4127-a867-b5b1d7423f54\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 08:17:06 crc kubenswrapper[4741]: I0226 08:17:06.255057 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/888ba1ec-7933-4127-a867-b5b1d7423f54-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"888ba1ec-7933-4127-a867-b5b1d7423f54\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 08:17:06 crc kubenswrapper[4741]: I0226 08:17:06.356216 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/888ba1ec-7933-4127-a867-b5b1d7423f54-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"888ba1ec-7933-4127-a867-b5b1d7423f54\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 08:17:06 crc kubenswrapper[4741]: I0226 08:17:06.356315 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/888ba1ec-7933-4127-a867-b5b1d7423f54-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"888ba1ec-7933-4127-a867-b5b1d7423f54\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 08:17:06 crc kubenswrapper[4741]: I0226 08:17:06.356327 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/888ba1ec-7933-4127-a867-b5b1d7423f54-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"888ba1ec-7933-4127-a867-b5b1d7423f54\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 08:17:06 crc kubenswrapper[4741]: I0226 08:17:06.374636 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/888ba1ec-7933-4127-a867-b5b1d7423f54-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"888ba1ec-7933-4127-a867-b5b1d7423f54\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 08:17:06 crc kubenswrapper[4741]: I0226 08:17:06.442420 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 08:17:08 crc kubenswrapper[4741]: E0226 08:17:08.487070 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 08:17:08 crc kubenswrapper[4741]: E0226 08:17:08.490453 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5slrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kss68_openshift-marketplace(d9461f6e-32f2-46cd-b0be-71ae66fdb20e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 08:17:08 crc kubenswrapper[4741]: E0226 08:17:08.491694 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kss68" podUID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.171068 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-586cb89fb6-q9s46"] Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.506851 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.508234 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.512175 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.525027 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b39b2de-d5c9-4651-a2de-cb816a67180f-var-lock\") pod \"installer-9-crc\" (UID: \"3b39b2de-d5c9-4651-a2de-cb816a67180f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.525416 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b39b2de-d5c9-4651-a2de-cb816a67180f-kube-api-access\") pod \"installer-9-crc\" (UID: \"3b39b2de-d5c9-4651-a2de-cb816a67180f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.525506 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b39b2de-d5c9-4651-a2de-cb816a67180f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3b39b2de-d5c9-4651-a2de-cb816a67180f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 08:17:10 crc kubenswrapper[4741]: W0226 08:17:10.568599 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf840543_6ba0_437e_8b2d_aa7089201072.slice/crio-fdb506e90a7a20b2a136301c4b3cd8aef3703b496c3a6f266211efd6f07c0c22 WatchSource:0}: Error finding container fdb506e90a7a20b2a136301c4b3cd8aef3703b496c3a6f266211efd6f07c0c22: Status 404 returned error can't find the container with id fdb506e90a7a20b2a136301c4b3cd8aef3703b496c3a6f266211efd6f07c0c22 Feb 26 08:17:10 crc kubenswrapper[4741]: E0226 
08:17:10.606918 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 26 08:17:10 crc kubenswrapper[4741]: E0226 08:17:10.607325 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttqd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-lmj66_openshift-marketplace(dbf04e22-1cdf-45ca-9a69-110a53166ff6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 08:17:10 crc kubenswrapper[4741]: E0226 08:17:10.608765 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lmj66" podUID="dbf04e22-1cdf-45ca-9a69-110a53166ff6" Feb 26 08:17:10 crc kubenswrapper[4741]: E0226 08:17:10.609731 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 26 08:17:10 crc kubenswrapper[4741]: E0226 08:17:10.609850 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tsmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rfttb_openshift-marketplace(769f8af2-a3e7-4d89-a15d-a81b50d12bc4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 08:17:10 crc kubenswrapper[4741]: E0226 08:17:10.610956 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rfttb" podUID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" Feb 26 08:17:10 crc 
kubenswrapper[4741]: I0226 08:17:10.626288 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b39b2de-d5c9-4651-a2de-cb816a67180f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3b39b2de-d5c9-4651-a2de-cb816a67180f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.626373 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b39b2de-d5c9-4651-a2de-cb816a67180f-var-lock\") pod \"installer-9-crc\" (UID: \"3b39b2de-d5c9-4651-a2de-cb816a67180f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.626415 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b39b2de-d5c9-4651-a2de-cb816a67180f-kube-api-access\") pod \"installer-9-crc\" (UID: \"3b39b2de-d5c9-4651-a2de-cb816a67180f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.626430 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b39b2de-d5c9-4651-a2de-cb816a67180f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3b39b2de-d5c9-4651-a2de-cb816a67180f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.626521 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b39b2de-d5c9-4651-a2de-cb816a67180f-var-lock\") pod \"installer-9-crc\" (UID: \"3b39b2de-d5c9-4651-a2de-cb816a67180f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 08:17:10 crc kubenswrapper[4741]: E0226 08:17:10.637267 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image 
from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 08:17:10 crc kubenswrapper[4741]: E0226 08:17:10.637456 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7j4f8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vrg9r_openshift-marketplace(4617c569-c733-49d0-8d5f-01a69cb53e73): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled" logger="UnhandledError" Feb 26 08:17:10 crc kubenswrapper[4741]: E0226 08:17:10.638667 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vrg9r" podUID="4617c569-c733-49d0-8d5f-01a69cb53e73" Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.645796 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b39b2de-d5c9-4651-a2de-cb816a67180f-kube-api-access\") pod \"installer-9-crc\" (UID: \"3b39b2de-d5c9-4651-a2de-cb816a67180f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.677824 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" event={"ID":"cf840543-6ba0-437e-8b2d-aa7089201072","Type":"ContainerStarted","Data":"fdb506e90a7a20b2a136301c4b3cd8aef3703b496c3a6f266211efd6f07c0c22"} Feb 26 08:17:10 crc kubenswrapper[4741]: I0226 08:17:10.872355 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 08:17:11 crc kubenswrapper[4741]: E0226 08:17:11.982205 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kss68" podUID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" Feb 26 08:17:11 crc kubenswrapper[4741]: E0226 08:17:11.982916 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vrg9r" podUID="4617c569-c733-49d0-8d5f-01a69cb53e73" Feb 26 08:17:11 crc kubenswrapper[4741]: E0226 08:17:11.983770 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rfttb" podUID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" Feb 26 08:17:12 crc kubenswrapper[4741]: E0226 08:17:12.002862 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 26 08:17:12 crc kubenswrapper[4741]: E0226 08:17:12.003026 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8fnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bfvsg_openshift-marketplace(ba490082-d248-4d24-86ea-a812f638c6f7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 08:17:12 crc kubenswrapper[4741]: E0226 08:17:12.004342 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-bfvsg" podUID="ba490082-d248-4d24-86ea-a812f638c6f7" Feb 26 08:17:12 crc 
kubenswrapper[4741]: E0226 08:17:12.028923 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 26 08:17:12 crc kubenswrapper[4741]: E0226 08:17:12.029145 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-js74v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-jrtcx_openshift-marketplace(f427872d-44a6-465f-b06a-8289364bab66): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 08:17:12 crc kubenswrapper[4741]: E0226 08:17:12.030669 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jrtcx" podUID="f427872d-44a6-465f-b06a-8289364bab66" Feb 26 08:17:12 crc kubenswrapper[4741]: E0226 08:17:12.039464 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 08:17:12 crc kubenswrapper[4741]: E0226 08:17:12.039646 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxvj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-j9v25_openshift-marketplace(3f0ad83e-7982-442d-93a3-8ba01a4e8ec3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 08:17:12 crc kubenswrapper[4741]: E0226 08:17:12.042838 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-j9v25" podUID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" Feb 26 08:17:12 crc 
kubenswrapper[4741]: I0226 08:17:12.453394 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 08:17:12 crc kubenswrapper[4741]: I0226 08:17:12.455331 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 08:17:12 crc kubenswrapper[4741]: W0226 08:17:12.462749 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3b39b2de_d5c9_4651_a2de_cb816a67180f.slice/crio-67dedc4edf5ff4020eeb377cc8cc01e53982e5eb5ae4623b5ebe38dc758752af WatchSource:0}: Error finding container 67dedc4edf5ff4020eeb377cc8cc01e53982e5eb5ae4623b5ebe38dc758752af: Status 404 returned error can't find the container with id 67dedc4edf5ff4020eeb377cc8cc01e53982e5eb5ae4623b5ebe38dc758752af Feb 26 08:17:12 crc kubenswrapper[4741]: W0226 08:17:12.468097 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod888ba1ec_7933_4127_a867_b5b1d7423f54.slice/crio-4290eb7a8b5f8b57efd3567065429a07b74fc41b12c73a0509203383c87970ef WatchSource:0}: Error finding container 4290eb7a8b5f8b57efd3567065429a07b74fc41b12c73a0509203383c87970ef: Status 404 returned error can't find the container with id 4290eb7a8b5f8b57efd3567065429a07b74fc41b12c73a0509203383c87970ef Feb 26 08:17:12 crc kubenswrapper[4741]: I0226 08:17:12.540188 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-586cb89fb6-q9s46"] Feb 26 08:17:12 crc kubenswrapper[4741]: W0226 08:17:12.549904 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70f44549_0735_41a4_ac23_2b6352c69b41.slice/crio-b09faa8686c6b61ba28a820d10934c9bd6976b51cc319c386ed2a50eb79790e9 WatchSource:0}: Error finding container b09faa8686c6b61ba28a820d10934c9bd6976b51cc319c386ed2a50eb79790e9: Status 404 returned error can't find the container with id 
b09faa8686c6b61ba28a820d10934c9bd6976b51cc319c386ed2a50eb79790e9 Feb 26 08:17:12 crc kubenswrapper[4741]: I0226 08:17:12.692574 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" event={"ID":"cf840543-6ba0-437e-8b2d-aa7089201072","Type":"ContainerStarted","Data":"a5afc5ebcc1dde8dea862be4f933d516c0565c11c14dbed0f228f6ea74769a7f"} Feb 26 08:17:12 crc kubenswrapper[4741]: I0226 08:17:12.692739 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" podUID="cf840543-6ba0-437e-8b2d-aa7089201072" containerName="route-controller-manager" containerID="cri-o://a5afc5ebcc1dde8dea862be4f933d516c0565c11c14dbed0f228f6ea74769a7f" gracePeriod=30 Feb 26 08:17:12 crc kubenswrapper[4741]: I0226 08:17:12.693328 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:17:12 crc kubenswrapper[4741]: I0226 08:17:12.698115 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"888ba1ec-7933-4127-a867-b5b1d7423f54","Type":"ContainerStarted","Data":"4290eb7a8b5f8b57efd3567065429a07b74fc41b12c73a0509203383c87970ef"} Feb 26 08:17:12 crc kubenswrapper[4741]: I0226 08:17:12.700830 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3b39b2de-d5c9-4651-a2de-cb816a67180f","Type":"ContainerStarted","Data":"67dedc4edf5ff4020eeb377cc8cc01e53982e5eb5ae4623b5ebe38dc758752af"} Feb 26 08:17:12 crc kubenswrapper[4741]: I0226 08:17:12.702402 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" 
event={"ID":"70f44549-0735-41a4-ac23-2b6352c69b41","Type":"ContainerStarted","Data":"b09faa8686c6b61ba28a820d10934c9bd6976b51cc319c386ed2a50eb79790e9"} Feb 26 08:17:12 crc kubenswrapper[4741]: I0226 08:17:12.707090 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534896-rcrbz" event={"ID":"565843a6-5907-4445-9686-cb92b1a56bec","Type":"ContainerStarted","Data":"98d61524639d9639c79360f0c5b009b3778fd5d02444d8b6ff49415abea59481"} Feb 26 08:17:12 crc kubenswrapper[4741]: E0226 08:17:12.710778 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bfvsg" podUID="ba490082-d248-4d24-86ea-a812f638c6f7" Feb 26 08:17:12 crc kubenswrapper[4741]: E0226 08:17:12.710836 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jrtcx" podUID="f427872d-44a6-465f-b06a-8289364bab66" Feb 26 08:17:12 crc kubenswrapper[4741]: E0226 08:17:12.710860 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-j9v25" podUID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" Feb 26 08:17:12 crc kubenswrapper[4741]: I0226 08:17:12.718419 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" podStartSLOduration=41.718403434 podStartE2EDuration="41.718403434s" podCreationTimestamp="2026-02-26 08:16:31 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:17:12.716277412 +0000 UTC m=+267.712214809" watchObservedRunningTime="2026-02-26 08:17:12.718403434 +0000 UTC m=+267.714340821" Feb 26 08:17:12 crc kubenswrapper[4741]: I0226 08:17:12.765295 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534896-rcrbz" podStartSLOduration=25.837958179 podStartE2EDuration="1m12.765272738s" podCreationTimestamp="2026-02-26 08:16:00 +0000 UTC" firstStartedPulling="2026-02-26 08:16:25.19153215 +0000 UTC m=+220.187469537" lastFinishedPulling="2026-02-26 08:17:12.118846689 +0000 UTC m=+267.114784096" observedRunningTime="2026-02-26 08:17:12.764382432 +0000 UTC m=+267.760319819" watchObservedRunningTime="2026-02-26 08:17:12.765272738 +0000 UTC m=+267.761210125" Feb 26 08:17:12 crc kubenswrapper[4741]: I0226 08:17:12.994913 4741 csr.go:261] certificate signing request csr-r4t7h is approved, waiting to be issued Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.007228 4741 csr.go:257] certificate signing request csr-r4t7h is issued Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.134540 4741 patch_prober.go:28] interesting pod/route-controller-manager-679c9d9ccb-6l8ng container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": read tcp 10.217.0.2:41526->10.217.0.56:8443: read: connection reset by peer" start-of-body= Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.134603 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" podUID="cf840543-6ba0-437e-8b2d-aa7089201072" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": read tcp 10.217.0.2:41526->10.217.0.56:8443: read: 
connection reset by peer" Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.715558 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-679c9d9ccb-6l8ng_cf840543-6ba0-437e-8b2d-aa7089201072/route-controller-manager/0.log" Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.715947 4741 generic.go:334] "Generic (PLEG): container finished" podID="cf840543-6ba0-437e-8b2d-aa7089201072" containerID="a5afc5ebcc1dde8dea862be4f933d516c0565c11c14dbed0f228f6ea74769a7f" exitCode=255 Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.716020 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" event={"ID":"cf840543-6ba0-437e-8b2d-aa7089201072","Type":"ContainerDied","Data":"a5afc5ebcc1dde8dea862be4f933d516c0565c11c14dbed0f228f6ea74769a7f"} Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.717961 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"888ba1ec-7933-4127-a867-b5b1d7423f54","Type":"ContainerStarted","Data":"e7279af3c8c071fe60410a6e8fdb65a872c37836fbb169b204a76ae786e8724a"} Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.721345 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3b39b2de-d5c9-4651-a2de-cb816a67180f","Type":"ContainerStarted","Data":"30d22615a186482239b3fba2888fc7d717e40e8ef9912691d0d8a43d26a85f5c"} Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.722562 4741 generic.go:334] "Generic (PLEG): container finished" podID="565843a6-5907-4445-9686-cb92b1a56bec" containerID="98d61524639d9639c79360f0c5b009b3778fd5d02444d8b6ff49415abea59481" exitCode=0 Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.722621 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534896-rcrbz" 
event={"ID":"565843a6-5907-4445-9686-cb92b1a56bec","Type":"ContainerDied","Data":"98d61524639d9639c79360f0c5b009b3778fd5d02444d8b6ff49415abea59481"} Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.725581 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" event={"ID":"70f44549-0735-41a4-ac23-2b6352c69b41","Type":"ContainerStarted","Data":"39d9ab5176a2a3f6336fe33bbd56765df007151b59440b02cb9ca0f36fa54880"} Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.725804 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" podUID="70f44549-0735-41a4-ac23-2b6352c69b41" containerName="controller-manager" containerID="cri-o://39d9ab5176a2a3f6336fe33bbd56765df007151b59440b02cb9ca0f36fa54880" gracePeriod=30 Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.726701 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.743309 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=7.743287649 podStartE2EDuration="7.743287649s" podCreationTimestamp="2026-02-26 08:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:17:13.741003153 +0000 UTC m=+268.736940550" watchObservedRunningTime="2026-02-26 08:17:13.743287649 +0000 UTC m=+268.739225036" Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.748598 4741 patch_prober.go:28] interesting pod/controller-manager-586cb89fb6-q9s46 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:33234->10.217.0.57:8443: 
read: connection reset by peer" start-of-body= Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.748695 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" podUID="70f44549-0735-41a4-ac23-2b6352c69b41" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:33234->10.217.0.57:8443: read: connection reset by peer" Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.778741 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" podStartSLOduration=23.778704382 podStartE2EDuration="23.778704382s" podCreationTimestamp="2026-02-26 08:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:17:13.772881474 +0000 UTC m=+268.768818901" watchObservedRunningTime="2026-02-26 08:17:13.778704382 +0000 UTC m=+268.774641769" Feb 26 08:17:13 crc kubenswrapper[4741]: I0226 08:17:13.818437 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.818240395 podStartE2EDuration="3.818240395s" podCreationTimestamp="2026-02-26 08:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:17:13.813166128 +0000 UTC m=+268.809103505" watchObservedRunningTime="2026-02-26 08:17:13.818240395 +0000 UTC m=+268.814177832" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.009124 4741 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-06 23:12:11.639814414 +0000 UTC Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.009160 4741 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Waiting 7550h54m57.630656493s for next certificate rotation Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.030652 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-679c9d9ccb-6l8ng_cf840543-6ba0-437e-8b2d-aa7089201072/route-controller-manager/0.log" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.030746 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.058747 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml"] Feb 26 08:17:14 crc kubenswrapper[4741]: E0226 08:17:14.059063 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf840543-6ba0-437e-8b2d-aa7089201072" containerName="route-controller-manager" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.059077 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf840543-6ba0-437e-8b2d-aa7089201072" containerName="route-controller-manager" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.059210 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf840543-6ba0-437e-8b2d-aa7089201072" containerName="route-controller-manager" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.059646 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.075427 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml"] Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.093617 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf840543-6ba0-437e-8b2d-aa7089201072-config\") pod \"cf840543-6ba0-437e-8b2d-aa7089201072\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.093698 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf840543-6ba0-437e-8b2d-aa7089201072-client-ca\") pod \"cf840543-6ba0-437e-8b2d-aa7089201072\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.093773 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5m5h\" (UniqueName: \"kubernetes.io/projected/cf840543-6ba0-437e-8b2d-aa7089201072-kube-api-access-w5m5h\") pod \"cf840543-6ba0-437e-8b2d-aa7089201072\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.093843 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf840543-6ba0-437e-8b2d-aa7089201072-serving-cert\") pod \"cf840543-6ba0-437e-8b2d-aa7089201072\" (UID: \"cf840543-6ba0-437e-8b2d-aa7089201072\") " Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.094574 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf840543-6ba0-437e-8b2d-aa7089201072-client-ca" (OuterVolumeSpecName: "client-ca") pod "cf840543-6ba0-437e-8b2d-aa7089201072" 
(UID: "cf840543-6ba0-437e-8b2d-aa7089201072"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.094707 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f20a560-1954-4ef4-aabc-fcacd84575f5-config\") pod \"route-controller-manager-ff5d988c8-5wfml\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.094739 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f20a560-1954-4ef4-aabc-fcacd84575f5-client-ca\") pod \"route-controller-manager-ff5d988c8-5wfml\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.094751 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf840543-6ba0-437e-8b2d-aa7089201072-config" (OuterVolumeSpecName: "config") pod "cf840543-6ba0-437e-8b2d-aa7089201072" (UID: "cf840543-6ba0-437e-8b2d-aa7089201072"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.094766 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f20a560-1954-4ef4-aabc-fcacd84575f5-serving-cert\") pod \"route-controller-manager-ff5d988c8-5wfml\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.094911 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sshvw\" (UniqueName: \"kubernetes.io/projected/1f20a560-1954-4ef4-aabc-fcacd84575f5-kube-api-access-sshvw\") pod \"route-controller-manager-ff5d988c8-5wfml\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.094993 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf840543-6ba0-437e-8b2d-aa7089201072-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.095009 4741 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf840543-6ba0-437e-8b2d-aa7089201072-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.100846 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf840543-6ba0-437e-8b2d-aa7089201072-kube-api-access-w5m5h" (OuterVolumeSpecName: "kube-api-access-w5m5h") pod "cf840543-6ba0-437e-8b2d-aa7089201072" (UID: "cf840543-6ba0-437e-8b2d-aa7089201072"). InnerVolumeSpecName "kube-api-access-w5m5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.100966 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf840543-6ba0-437e-8b2d-aa7089201072-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cf840543-6ba0-437e-8b2d-aa7089201072" (UID: "cf840543-6ba0-437e-8b2d-aa7089201072"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.131963 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.195601 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f44549-0735-41a4-ac23-2b6352c69b41-serving-cert\") pod \"70f44549-0735-41a4-ac23-2b6352c69b41\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.195913 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-proxy-ca-bundles\") pod \"70f44549-0735-41a4-ac23-2b6352c69b41\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.195942 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-client-ca\") pod \"70f44549-0735-41a4-ac23-2b6352c69b41\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.195967 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5l4n\" (UniqueName: 
\"kubernetes.io/projected/70f44549-0735-41a4-ac23-2b6352c69b41-kube-api-access-m5l4n\") pod \"70f44549-0735-41a4-ac23-2b6352c69b41\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.195986 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-config\") pod \"70f44549-0735-41a4-ac23-2b6352c69b41\" (UID: \"70f44549-0735-41a4-ac23-2b6352c69b41\") " Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.196099 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sshvw\" (UniqueName: \"kubernetes.io/projected/1f20a560-1954-4ef4-aabc-fcacd84575f5-kube-api-access-sshvw\") pod \"route-controller-manager-ff5d988c8-5wfml\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.196158 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f20a560-1954-4ef4-aabc-fcacd84575f5-config\") pod \"route-controller-manager-ff5d988c8-5wfml\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.196180 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f20a560-1954-4ef4-aabc-fcacd84575f5-client-ca\") pod \"route-controller-manager-ff5d988c8-5wfml\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.196203 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1f20a560-1954-4ef4-aabc-fcacd84575f5-serving-cert\") pod \"route-controller-manager-ff5d988c8-5wfml\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.196257 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5m5h\" (UniqueName: \"kubernetes.io/projected/cf840543-6ba0-437e-8b2d-aa7089201072-kube-api-access-w5m5h\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.196270 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf840543-6ba0-437e-8b2d-aa7089201072-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.196943 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "70f44549-0735-41a4-ac23-2b6352c69b41" (UID: "70f44549-0735-41a4-ac23-2b6352c69b41"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.197046 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-config" (OuterVolumeSpecName: "config") pod "70f44549-0735-41a4-ac23-2b6352c69b41" (UID: "70f44549-0735-41a4-ac23-2b6352c69b41"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.197690 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f20a560-1954-4ef4-aabc-fcacd84575f5-client-ca\") pod \"route-controller-manager-ff5d988c8-5wfml\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.197830 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-client-ca" (OuterVolumeSpecName: "client-ca") pod "70f44549-0735-41a4-ac23-2b6352c69b41" (UID: "70f44549-0735-41a4-ac23-2b6352c69b41"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.198295 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f20a560-1954-4ef4-aabc-fcacd84575f5-config\") pod \"route-controller-manager-ff5d988c8-5wfml\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.199943 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f44549-0735-41a4-ac23-2b6352c69b41-kube-api-access-m5l4n" (OuterVolumeSpecName: "kube-api-access-m5l4n") pod "70f44549-0735-41a4-ac23-2b6352c69b41" (UID: "70f44549-0735-41a4-ac23-2b6352c69b41"). InnerVolumeSpecName "kube-api-access-m5l4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.202565 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f20a560-1954-4ef4-aabc-fcacd84575f5-serving-cert\") pod \"route-controller-manager-ff5d988c8-5wfml\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.206401 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f44549-0735-41a4-ac23-2b6352c69b41-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "70f44549-0735-41a4-ac23-2b6352c69b41" (UID: "70f44549-0735-41a4-ac23-2b6352c69b41"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.216438 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sshvw\" (UniqueName: \"kubernetes.io/projected/1f20a560-1954-4ef4-aabc-fcacd84575f5-kube-api-access-sshvw\") pod \"route-controller-manager-ff5d988c8-5wfml\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.297830 4741 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.297876 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5l4n\" (UniqueName: \"kubernetes.io/projected/70f44549-0735-41a4-ac23-2b6352c69b41-kube-api-access-m5l4n\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.297891 4741 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.297903 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f44549-0735-41a4-ac23-2b6352c69b41-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.297915 4741 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70f44549-0735-41a4-ac23-2b6352c69b41-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.383648 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.734653 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-679c9d9ccb-6l8ng_cf840543-6ba0-437e-8b2d-aa7089201072/route-controller-manager/0.log" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.734776 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" event={"ID":"cf840543-6ba0-437e-8b2d-aa7089201072","Type":"ContainerDied","Data":"fdb506e90a7a20b2a136301c4b3cd8aef3703b496c3a6f266211efd6f07c0c22"} Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.734815 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.734852 4741 scope.go:117] "RemoveContainer" containerID="a5afc5ebcc1dde8dea862be4f933d516c0565c11c14dbed0f228f6ea74769a7f" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.738257 4741 generic.go:334] "Generic (PLEG): container finished" podID="888ba1ec-7933-4127-a867-b5b1d7423f54" containerID="e7279af3c8c071fe60410a6e8fdb65a872c37836fbb169b204a76ae786e8724a" exitCode=0 Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.738339 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"888ba1ec-7933-4127-a867-b5b1d7423f54","Type":"ContainerDied","Data":"e7279af3c8c071fe60410a6e8fdb65a872c37836fbb169b204a76ae786e8724a"} Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.740327 4741 generic.go:334] "Generic (PLEG): container finished" podID="70f44549-0735-41a4-ac23-2b6352c69b41" containerID="39d9ab5176a2a3f6336fe33bbd56765df007151b59440b02cb9ca0f36fa54880" exitCode=0 Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.740388 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.740543 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" event={"ID":"70f44549-0735-41a4-ac23-2b6352c69b41","Type":"ContainerDied","Data":"39d9ab5176a2a3f6336fe33bbd56765df007151b59440b02cb9ca0f36fa54880"} Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.740609 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-586cb89fb6-q9s46" event={"ID":"70f44549-0735-41a4-ac23-2b6352c69b41","Type":"ContainerDied","Data":"b09faa8686c6b61ba28a820d10934c9bd6976b51cc319c386ed2a50eb79790e9"} Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.759502 4741 scope.go:117] "RemoveContainer" containerID="39d9ab5176a2a3f6336fe33bbd56765df007151b59440b02cb9ca0f36fa54880" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.772936 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng"] Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.775756 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679c9d9ccb-6l8ng"] Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.802443 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-586cb89fb6-q9s46"] Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.807995 4741 scope.go:117] "RemoveContainer" containerID="39d9ab5176a2a3f6336fe33bbd56765df007151b59440b02cb9ca0f36fa54880" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.809415 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-586cb89fb6-q9s46"] Feb 26 08:17:14 crc kubenswrapper[4741]: E0226 08:17:14.810011 4741 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39d9ab5176a2a3f6336fe33bbd56765df007151b59440b02cb9ca0f36fa54880\": container with ID starting with 39d9ab5176a2a3f6336fe33bbd56765df007151b59440b02cb9ca0f36fa54880 not found: ID does not exist" containerID="39d9ab5176a2a3f6336fe33bbd56765df007151b59440b02cb9ca0f36fa54880" Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.810074 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39d9ab5176a2a3f6336fe33bbd56765df007151b59440b02cb9ca0f36fa54880"} err="failed to get container status \"39d9ab5176a2a3f6336fe33bbd56765df007151b59440b02cb9ca0f36fa54880\": rpc error: code = NotFound desc = could not find container \"39d9ab5176a2a3f6336fe33bbd56765df007151b59440b02cb9ca0f36fa54880\": container with ID starting with 39d9ab5176a2a3f6336fe33bbd56765df007151b59440b02cb9ca0f36fa54880 not found: ID does not exist" Feb 26 08:17:14 crc kubenswrapper[4741]: W0226 08:17:14.812318 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f20a560_1954_4ef4_aabc_fcacd84575f5.slice/crio-a126218a1586714dcbf5ac5bf4531494045064a8590006ddb04daaf7c7b6f521 WatchSource:0}: Error finding container a126218a1586714dcbf5ac5bf4531494045064a8590006ddb04daaf7c7b6f521: Status 404 returned error can't find the container with id a126218a1586714dcbf5ac5bf4531494045064a8590006ddb04daaf7c7b6f521 Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.812483 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml"] Feb 26 08:17:14 crc kubenswrapper[4741]: I0226 08:17:14.976348 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534896-rcrbz" Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.009101 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4rxq\" (UniqueName: \"kubernetes.io/projected/565843a6-5907-4445-9686-cb92b1a56bec-kube-api-access-d4rxq\") pod \"565843a6-5907-4445-9686-cb92b1a56bec\" (UID: \"565843a6-5907-4445-9686-cb92b1a56bec\") " Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.009778 4741 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-22 07:36:04.883469617 +0000 UTC Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.009820 4741 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6455h18m49.873653651s for next certificate rotation Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.014684 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565843a6-5907-4445-9686-cb92b1a56bec-kube-api-access-d4rxq" (OuterVolumeSpecName: "kube-api-access-d4rxq") pod "565843a6-5907-4445-9686-cb92b1a56bec" (UID: "565843a6-5907-4445-9686-cb92b1a56bec"). InnerVolumeSpecName "kube-api-access-d4rxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.111585 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4rxq\" (UniqueName: \"kubernetes.io/projected/565843a6-5907-4445-9686-cb92b1a56bec-kube-api-access-d4rxq\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.749858 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534896-rcrbz" event={"ID":"565843a6-5907-4445-9686-cb92b1a56bec","Type":"ContainerDied","Data":"dae926d27924f18f2c634b94b448ef4bf9c6b167d41e30a282b363af48ecd108"} Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.749935 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dae926d27924f18f2c634b94b448ef4bf9c6b167d41e30a282b363af48ecd108" Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.749931 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534896-rcrbz" Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.755713 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" event={"ID":"1f20a560-1954-4ef4-aabc-fcacd84575f5","Type":"ContainerStarted","Data":"9a0be1dfe764e2c14ced13ff37b8a0f24f4e3a3a70fbf4cd2b9a449e1057df50"} Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.755750 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" event={"ID":"1f20a560-1954-4ef4-aabc-fcacd84575f5","Type":"ContainerStarted","Data":"a126218a1586714dcbf5ac5bf4531494045064a8590006ddb04daaf7c7b6f521"} Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.756101 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 
26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.776009 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.780245 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" podStartSLOduration=5.780230708 podStartE2EDuration="5.780230708s" podCreationTimestamp="2026-02-26 08:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:17:15.780083304 +0000 UTC m=+270.776020731" watchObservedRunningTime="2026-02-26 08:17:15.780230708 +0000 UTC m=+270.776168095" Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.825743 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f44549-0735-41a4-ac23-2b6352c69b41" path="/var/lib/kubelet/pods/70f44549-0735-41a4-ac23-2b6352c69b41/volumes" Feb 26 08:17:15 crc kubenswrapper[4741]: I0226 08:17:15.829728 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf840543-6ba0-437e-8b2d-aa7089201072" path="/var/lib/kubelet/pods/cf840543-6ba0-437e-8b2d-aa7089201072/volumes" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.018863 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.129536 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/888ba1ec-7933-4127-a867-b5b1d7423f54-kube-api-access\") pod \"888ba1ec-7933-4127-a867-b5b1d7423f54\" (UID: \"888ba1ec-7933-4127-a867-b5b1d7423f54\") " Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.129646 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/888ba1ec-7933-4127-a867-b5b1d7423f54-kubelet-dir\") pod \"888ba1ec-7933-4127-a867-b5b1d7423f54\" (UID: \"888ba1ec-7933-4127-a867-b5b1d7423f54\") " Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.129739 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/888ba1ec-7933-4127-a867-b5b1d7423f54-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "888ba1ec-7933-4127-a867-b5b1d7423f54" (UID: "888ba1ec-7933-4127-a867-b5b1d7423f54"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.130054 4741 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/888ba1ec-7933-4127-a867-b5b1d7423f54-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.136999 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/888ba1ec-7933-4127-a867-b5b1d7423f54-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "888ba1ec-7933-4127-a867-b5b1d7423f54" (UID: "888ba1ec-7933-4127-a867-b5b1d7423f54"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.231174 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/888ba1ec-7933-4127-a867-b5b1d7423f54-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.761723 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"888ba1ec-7933-4127-a867-b5b1d7423f54","Type":"ContainerDied","Data":"4290eb7a8b5f8b57efd3567065429a07b74fc41b12c73a0509203383c87970ef"} Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.761772 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.761801 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4290eb7a8b5f8b57efd3567065429a07b74fc41b12c73a0509203383c87970ef" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.826996 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-975456f97-s92lt"] Feb 26 08:17:16 crc kubenswrapper[4741]: E0226 08:17:16.827249 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f44549-0735-41a4-ac23-2b6352c69b41" containerName="controller-manager" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.827269 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f44549-0735-41a4-ac23-2b6352c69b41" containerName="controller-manager" Feb 26 08:17:16 crc kubenswrapper[4741]: E0226 08:17:16.827280 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565843a6-5907-4445-9686-cb92b1a56bec" containerName="oc" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.827287 4741 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="565843a6-5907-4445-9686-cb92b1a56bec" containerName="oc" Feb 26 08:17:16 crc kubenswrapper[4741]: E0226 08:17:16.827297 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888ba1ec-7933-4127-a867-b5b1d7423f54" containerName="pruner" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.827303 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="888ba1ec-7933-4127-a867-b5b1d7423f54" containerName="pruner" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.827418 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f44549-0735-41a4-ac23-2b6352c69b41" containerName="controller-manager" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.827434 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="565843a6-5907-4445-9686-cb92b1a56bec" containerName="oc" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.827446 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="888ba1ec-7933-4127-a867-b5b1d7423f54" containerName="pruner" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.827836 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.829938 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.829938 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.830432 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.830538 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.830623 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.831434 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.837662 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.839963 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-975456f97-s92lt"] Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.942137 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-client-ca\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " 
pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.942560 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wr82\" (UniqueName: \"kubernetes.io/projected/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-kube-api-access-2wr82\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.942675 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-serving-cert\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.942758 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-config\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:16 crc kubenswrapper[4741]: I0226 08:17:16.942776 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-proxy-ca-bundles\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.045095 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-config\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.045165 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-proxy-ca-bundles\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.045209 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-client-ca\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.045250 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wr82\" (UniqueName: \"kubernetes.io/projected/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-kube-api-access-2wr82\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.045305 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-serving-cert\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.046745 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-proxy-ca-bundles\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.047624 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-client-ca\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.048096 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-config\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.050345 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-serving-cert\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.072366 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wr82\" (UniqueName: \"kubernetes.io/projected/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-kube-api-access-2wr82\") pod \"controller-manager-975456f97-s92lt\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:17 crc 
kubenswrapper[4741]: I0226 08:17:17.157793 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.358786 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-975456f97-s92lt"] Feb 26 08:17:17 crc kubenswrapper[4741]: W0226 08:17:17.364337 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2545baf_cce3_4e1b_abf9_e3b13b2c1181.slice/crio-1519d75ee4d623a2c4ca071ed6c867f64a3d0c5d0eca990461665fdaff6e32e1 WatchSource:0}: Error finding container 1519d75ee4d623a2c4ca071ed6c867f64a3d0c5d0eca990461665fdaff6e32e1: Status 404 returned error can't find the container with id 1519d75ee4d623a2c4ca071ed6c867f64a3d0c5d0eca990461665fdaff6e32e1 Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.768162 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-975456f97-s92lt" event={"ID":"d2545baf-cce3-4e1b-abf9-e3b13b2c1181","Type":"ContainerStarted","Data":"e0021a2f8417c78b1459d649e87256f0d46e9d3a073af88db776ddaffa80501e"} Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.768467 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-975456f97-s92lt" event={"ID":"d2545baf-cce3-4e1b-abf9-e3b13b2c1181","Type":"ContainerStarted","Data":"1519d75ee4d623a2c4ca071ed6c867f64a3d0c5d0eca990461665fdaff6e32e1"} Feb 26 08:17:17 crc kubenswrapper[4741]: I0226 08:17:17.787263 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-975456f97-s92lt" podStartSLOduration=7.787245283 podStartE2EDuration="7.787245283s" podCreationTimestamp="2026-02-26 08:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:17:17.783573727 +0000 UTC m=+272.779511124" watchObservedRunningTime="2026-02-26 08:17:17.787245283 +0000 UTC m=+272.783182670" Feb 26 08:17:18 crc kubenswrapper[4741]: I0226 08:17:18.774471 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:18 crc kubenswrapper[4741]: I0226 08:17:18.780765 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:19 crc kubenswrapper[4741]: I0226 08:17:19.785241 4741 generic.go:334] "Generic (PLEG): container finished" podID="326d4c0d-4365-4ae3-b9b3-8abf324c80e4" containerID="915668c246b3478a49269f16717fa04d1a1c8c8ec483f33c23c96cd97ea537b7" exitCode=0 Feb 26 08:17:19 crc kubenswrapper[4741]: I0226 08:17:19.785396 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mwjzl" event={"ID":"326d4c0d-4365-4ae3-b9b3-8abf324c80e4","Type":"ContainerDied","Data":"915668c246b3478a49269f16717fa04d1a1c8c8ec483f33c23c96cd97ea537b7"} Feb 26 08:17:20 crc kubenswrapper[4741]: I0226 08:17:20.793861 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mwjzl" event={"ID":"326d4c0d-4365-4ae3-b9b3-8abf324c80e4","Type":"ContainerStarted","Data":"bed446f3e6a470a1e815a9364053b71621dde32f02ef12b029f559fe6f782962"} Feb 26 08:17:20 crc kubenswrapper[4741]: I0226 08:17:20.816121 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mwjzl" podStartSLOduration=4.772153274 podStartE2EDuration="51.816090755s" podCreationTimestamp="2026-02-26 08:16:29 +0000 UTC" firstStartedPulling="2026-02-26 08:16:33.121139772 +0000 UTC m=+228.117077159" lastFinishedPulling="2026-02-26 08:17:20.165077253 +0000 UTC m=+275.161014640" 
observedRunningTime="2026-02-26 08:17:20.813204062 +0000 UTC m=+275.809141469" watchObservedRunningTime="2026-02-26 08:17:20.816090755 +0000 UTC m=+275.812028142" Feb 26 08:17:23 crc kubenswrapper[4741]: I0226 08:17:23.816665 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmj66" event={"ID":"dbf04e22-1cdf-45ca-9a69-110a53166ff6","Type":"ContainerStarted","Data":"63e44f8fd63d7acd613397788f999b8f8e8963c842e69232d7144e9740537b15"} Feb 26 08:17:24 crc kubenswrapper[4741]: I0226 08:17:24.823566 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfvsg" event={"ID":"ba490082-d248-4d24-86ea-a812f638c6f7","Type":"ContainerStarted","Data":"6762f806a4f6c8620418be330e7397d0461b6701f834201f0ba5e7c09edc82c8"} Feb 26 08:17:24 crc kubenswrapper[4741]: I0226 08:17:24.826622 4741 generic.go:334] "Generic (PLEG): container finished" podID="dbf04e22-1cdf-45ca-9a69-110a53166ff6" containerID="63e44f8fd63d7acd613397788f999b8f8e8963c842e69232d7144e9740537b15" exitCode=0 Feb 26 08:17:24 crc kubenswrapper[4741]: I0226 08:17:24.826674 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmj66" event={"ID":"dbf04e22-1cdf-45ca-9a69-110a53166ff6","Type":"ContainerDied","Data":"63e44f8fd63d7acd613397788f999b8f8e8963c842e69232d7144e9740537b15"} Feb 26 08:17:24 crc kubenswrapper[4741]: I0226 08:17:24.830759 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rfttb" event={"ID":"769f8af2-a3e7-4d89-a15d-a81b50d12bc4","Type":"ContainerStarted","Data":"3e5b8272bf00ab8a628f1ceb3ea3025791b5e874e3fbf84f188bc6d7df934b5e"} Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.149765 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.149855 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.149918 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.150771 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.150837 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76" gracePeriod=600 Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.839063 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrtcx" event={"ID":"f427872d-44a6-465f-b06a-8289364bab66","Type":"ContainerStarted","Data":"9319300f3cdfaed8746ed96d0c25efeac8e7be2335057e1ab6fdb1358519f3e0"} Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.845633 4741 generic.go:334] "Generic (PLEG): container finished" podID="ba490082-d248-4d24-86ea-a812f638c6f7" 
containerID="6762f806a4f6c8620418be330e7397d0461b6701f834201f0ba5e7c09edc82c8" exitCode=0 Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.845710 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfvsg" event={"ID":"ba490082-d248-4d24-86ea-a812f638c6f7","Type":"ContainerDied","Data":"6762f806a4f6c8620418be330e7397d0461b6701f834201f0ba5e7c09edc82c8"} Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.849009 4741 generic.go:334] "Generic (PLEG): container finished" podID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" containerID="e837a8cb09c0771615877bb835f7c7b9858d9ae46e06c2fe2fce6e1666e1d454" exitCode=0 Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.849118 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9v25" event={"ID":"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3","Type":"ContainerDied","Data":"e837a8cb09c0771615877bb835f7c7b9858d9ae46e06c2fe2fce6e1666e1d454"} Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.854258 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76" exitCode=0 Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.854336 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76"} Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.854388 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"a57bab21a45c9d03ead0ef63b5fa03a6a68d8b3aa5ce11c056c77e586f94dc22"} Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.869776 4741 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmj66" event={"ID":"dbf04e22-1cdf-45ca-9a69-110a53166ff6","Type":"ContainerStarted","Data":"2784d715d4f9ee2970dbc4c70583b9717b6e2a5a9021207ff94eabfa469edf30"} Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.883700 4741 generic.go:334] "Generic (PLEG): container finished" podID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" containerID="3e5b8272bf00ab8a628f1ceb3ea3025791b5e874e3fbf84f188bc6d7df934b5e" exitCode=0 Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.883775 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rfttb" event={"ID":"769f8af2-a3e7-4d89-a15d-a81b50d12bc4","Type":"ContainerDied","Data":"3e5b8272bf00ab8a628f1ceb3ea3025791b5e874e3fbf84f188bc6d7df934b5e"} Feb 26 08:17:25 crc kubenswrapper[4741]: I0226 08:17:25.932029 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lmj66" podStartSLOduration=4.108930336 podStartE2EDuration="55.932004763s" podCreationTimestamp="2026-02-26 08:16:30 +0000 UTC" firstStartedPulling="2026-02-26 08:16:33.407785926 +0000 UTC m=+228.403723313" lastFinishedPulling="2026-02-26 08:17:25.230860353 +0000 UTC m=+280.226797740" observedRunningTime="2026-02-26 08:17:25.930431428 +0000 UTC m=+280.926368815" watchObservedRunningTime="2026-02-26 08:17:25.932004763 +0000 UTC m=+280.927942150" Feb 26 08:17:26 crc kubenswrapper[4741]: I0226 08:17:26.893274 4741 generic.go:334] "Generic (PLEG): container finished" podID="f427872d-44a6-465f-b06a-8289364bab66" containerID="9319300f3cdfaed8746ed96d0c25efeac8e7be2335057e1ab6fdb1358519f3e0" exitCode=0 Feb 26 08:17:26 crc kubenswrapper[4741]: I0226 08:17:26.893350 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrtcx" 
event={"ID":"f427872d-44a6-465f-b06a-8289364bab66","Type":"ContainerDied","Data":"9319300f3cdfaed8746ed96d0c25efeac8e7be2335057e1ab6fdb1358519f3e0"} Feb 26 08:17:27 crc kubenswrapper[4741]: I0226 08:17:27.903236 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrtcx" event={"ID":"f427872d-44a6-465f-b06a-8289364bab66","Type":"ContainerStarted","Data":"d4c80cefe8a45e1385b18f34a2bacd65952300dcf1cce58cff5db7777a73f9ea"} Feb 26 08:17:27 crc kubenswrapper[4741]: I0226 08:17:27.906003 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfvsg" event={"ID":"ba490082-d248-4d24-86ea-a812f638c6f7","Type":"ContainerStarted","Data":"2631541cfe52d7d9059e85b2093f393ed60cd749a6542d88903abf7512710c82"} Feb 26 08:17:27 crc kubenswrapper[4741]: I0226 08:17:27.908264 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9v25" event={"ID":"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3","Type":"ContainerStarted","Data":"4c04bebcd5a575617b1d8499be6ce5b523041e92df21263c56da55a1f78abbe9"} Feb 26 08:17:27 crc kubenswrapper[4741]: I0226 08:17:27.910508 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kss68" event={"ID":"d9461f6e-32f2-46cd-b0be-71ae66fdb20e","Type":"ContainerStarted","Data":"ce1a2c3aae618a7175937285bd9ffeedcbd8dd8e5bf4599afc442674e80fe445"} Feb 26 08:17:27 crc kubenswrapper[4741]: I0226 08:17:27.912662 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rfttb" event={"ID":"769f8af2-a3e7-4d89-a15d-a81b50d12bc4","Type":"ContainerStarted","Data":"47770db953472300f382d59117bcca65f05a1954922a62ad02c3e8a72d5ce113"} Feb 26 08:17:27 crc kubenswrapper[4741]: I0226 08:17:27.914896 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg9r" 
event={"ID":"4617c569-c733-49d0-8d5f-01a69cb53e73","Type":"ContainerStarted","Data":"f0786d6f2dc5cc4b04e495f8dfcf2a4b76105edd34cefdf58d038de1a84f5192"} Feb 26 08:17:27 crc kubenswrapper[4741]: I0226 08:17:27.931285 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jrtcx" podStartSLOduration=2.903958446 podStartE2EDuration="55.931244624s" podCreationTimestamp="2026-02-26 08:16:32 +0000 UTC" firstStartedPulling="2026-02-26 08:16:34.251701508 +0000 UTC m=+229.247638895" lastFinishedPulling="2026-02-26 08:17:27.278987686 +0000 UTC m=+282.274925073" observedRunningTime="2026-02-26 08:17:27.929299728 +0000 UTC m=+282.925237125" watchObservedRunningTime="2026-02-26 08:17:27.931244624 +0000 UTC m=+282.927182011" Feb 26 08:17:28 crc kubenswrapper[4741]: I0226 08:17:28.013158 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j9v25" podStartSLOduration=9.374518225 podStartE2EDuration="55.01313569s" podCreationTimestamp="2026-02-26 08:16:33 +0000 UTC" firstStartedPulling="2026-02-26 08:16:41.28181689 +0000 UTC m=+236.277754277" lastFinishedPulling="2026-02-26 08:17:26.920434325 +0000 UTC m=+281.916371742" observedRunningTime="2026-02-26 08:17:28.010893636 +0000 UTC m=+283.006831033" watchObservedRunningTime="2026-02-26 08:17:28.01313569 +0000 UTC m=+283.009073077" Feb 26 08:17:28 crc kubenswrapper[4741]: I0226 08:17:28.014924 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bfvsg" podStartSLOduration=4.166299132 podStartE2EDuration="57.014915742s" podCreationTimestamp="2026-02-26 08:16:31 +0000 UTC" firstStartedPulling="2026-02-26 08:16:34.138697758 +0000 UTC m=+229.134635145" lastFinishedPulling="2026-02-26 08:17:26.987314348 +0000 UTC m=+281.983251755" observedRunningTime="2026-02-26 08:17:27.976346357 +0000 UTC m=+282.972283744" watchObservedRunningTime="2026-02-26 
08:17:28.014915742 +0000 UTC m=+283.010853129" Feb 26 08:17:28 crc kubenswrapper[4741]: I0226 08:17:28.923246 4741 generic.go:334] "Generic (PLEG): container finished" podID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" containerID="ce1a2c3aae618a7175937285bd9ffeedcbd8dd8e5bf4599afc442674e80fe445" exitCode=0 Feb 26 08:17:28 crc kubenswrapper[4741]: I0226 08:17:28.923307 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kss68" event={"ID":"d9461f6e-32f2-46cd-b0be-71ae66fdb20e","Type":"ContainerDied","Data":"ce1a2c3aae618a7175937285bd9ffeedcbd8dd8e5bf4599afc442674e80fe445"} Feb 26 08:17:28 crc kubenswrapper[4741]: I0226 08:17:28.925378 4741 generic.go:334] "Generic (PLEG): container finished" podID="4617c569-c733-49d0-8d5f-01a69cb53e73" containerID="f0786d6f2dc5cc4b04e495f8dfcf2a4b76105edd34cefdf58d038de1a84f5192" exitCode=0 Feb 26 08:17:28 crc kubenswrapper[4741]: I0226 08:17:28.925427 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg9r" event={"ID":"4617c569-c733-49d0-8d5f-01a69cb53e73","Type":"ContainerDied","Data":"f0786d6f2dc5cc4b04e495f8dfcf2a4b76105edd34cefdf58d038de1a84f5192"} Feb 26 08:17:28 crc kubenswrapper[4741]: I0226 08:17:28.942102 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rfttb" podStartSLOduration=4.921831706 podStartE2EDuration="59.942078682s" podCreationTimestamp="2026-02-26 08:16:29 +0000 UTC" firstStartedPulling="2026-02-26 08:16:32.026374146 +0000 UTC m=+227.022311543" lastFinishedPulling="2026-02-26 08:17:27.046621122 +0000 UTC m=+282.042558519" observedRunningTime="2026-02-26 08:17:28.124716185 +0000 UTC m=+283.120653572" watchObservedRunningTime="2026-02-26 08:17:28.942078682 +0000 UTC m=+283.938016069" Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.122189 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.122744 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.194412 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-975456f97-s92lt"] Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.194792 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-975456f97-s92lt" podUID="d2545baf-cce3-4e1b-abf9-e3b13b2c1181" containerName="controller-manager" containerID="cri-o://e0021a2f8417c78b1459d649e87256f0d46e9d3a073af88db776ddaffa80501e" gracePeriod=30 Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.215907 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml"] Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.216231 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" podUID="1f20a560-1954-4ef4-aabc-fcacd84575f5" containerName="route-controller-manager" containerID="cri-o://9a0be1dfe764e2c14ced13ff37b8a0f24f4e3a3a70fbf4cd2b9a449e1057df50" gracePeriod=30 Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.332392 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.332471 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.556238 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mwjzl" 
Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.560028 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.695369 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.695421 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.741675 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.937584 4741 generic.go:334] "Generic (PLEG): container finished" podID="1f20a560-1954-4ef4-aabc-fcacd84575f5" containerID="9a0be1dfe764e2c14ced13ff37b8a0f24f4e3a3a70fbf4cd2b9a449e1057df50" exitCode=0 Feb 26 08:17:30 crc kubenswrapper[4741]: I0226 08:17:30.937732 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" event={"ID":"1f20a560-1954-4ef4-aabc-fcacd84575f5","Type":"ContainerDied","Data":"9a0be1dfe764e2c14ced13ff37b8a0f24f4e3a3a70fbf4cd2b9a449e1057df50"} Feb 26 08:17:31 crc kubenswrapper[4741]: I0226 08:17:31.005096 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:17:31 crc kubenswrapper[4741]: I0226 08:17:31.052019 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:17:31 crc kubenswrapper[4741]: I0226 08:17:31.071467 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-42f2w"] Feb 26 08:17:31 crc kubenswrapper[4741]: I0226 08:17:31.824948 4741 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lmj66"] Feb 26 08:17:31 crc kubenswrapper[4741]: I0226 08:17:31.951366 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-975456f97-s92lt" event={"ID":"d2545baf-cce3-4e1b-abf9-e3b13b2c1181","Type":"ContainerDied","Data":"e0021a2f8417c78b1459d649e87256f0d46e9d3a073af88db776ddaffa80501e"} Feb 26 08:17:31 crc kubenswrapper[4741]: I0226 08:17:31.951302 4741 generic.go:334] "Generic (PLEG): container finished" podID="d2545baf-cce3-4e1b-abf9-e3b13b2c1181" containerID="e0021a2f8417c78b1459d649e87256f0d46e9d3a073af88db776ddaffa80501e" exitCode=0 Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.089557 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.089651 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.251996 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.509780 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.510187 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.562922 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.864097 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.869917 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.895916 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f77b8bd75-lk295"] Feb 26 08:17:32 crc kubenswrapper[4741]: E0226 08:17:32.896268 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2545baf-cce3-4e1b-abf9-e3b13b2c1181" containerName="controller-manager" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.896294 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2545baf-cce3-4e1b-abf9-e3b13b2c1181" containerName="controller-manager" Feb 26 08:17:32 crc kubenswrapper[4741]: E0226 08:17:32.896334 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f20a560-1954-4ef4-aabc-fcacd84575f5" containerName="route-controller-manager" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.896348 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f20a560-1954-4ef4-aabc-fcacd84575f5" containerName="route-controller-manager" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.896541 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2545baf-cce3-4e1b-abf9-e3b13b2c1181" containerName="controller-manager" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.896564 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f20a560-1954-4ef4-aabc-fcacd84575f5" containerName="route-controller-manager" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.897210 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.920379 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f77b8bd75-lk295"] Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.960566 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-975456f97-s92lt" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.960545 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-975456f97-s92lt" event={"ID":"d2545baf-cce3-4e1b-abf9-e3b13b2c1181","Type":"ContainerDied","Data":"1519d75ee4d623a2c4ca071ed6c867f64a3d0c5d0eca990461665fdaff6e32e1"} Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.960757 4741 scope.go:117] "RemoveContainer" containerID="e0021a2f8417c78b1459d649e87256f0d46e9d3a073af88db776ddaffa80501e" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.963234 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.963284 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml" event={"ID":"1f20a560-1954-4ef4-aabc-fcacd84575f5","Type":"ContainerDied","Data":"a126218a1586714dcbf5ac5bf4531494045064a8590006ddb04daaf7c7b6f521"} Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.964215 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lmj66" podUID="dbf04e22-1cdf-45ca-9a69-110a53166ff6" containerName="registry-server" containerID="cri-o://2784d715d4f9ee2970dbc4c70583b9717b6e2a5a9021207ff94eabfa469edf30" gracePeriod=2 Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.996639 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f20a560-1954-4ef4-aabc-fcacd84575f5-config\") pod \"1f20a560-1954-4ef4-aabc-fcacd84575f5\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.996711 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sshvw\" (UniqueName: \"kubernetes.io/projected/1f20a560-1954-4ef4-aabc-fcacd84575f5-kube-api-access-sshvw\") pod \"1f20a560-1954-4ef4-aabc-fcacd84575f5\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.996776 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f20a560-1954-4ef4-aabc-fcacd84575f5-serving-cert\") pod \"1f20a560-1954-4ef4-aabc-fcacd84575f5\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.996806 4741 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-config\") pod \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.996826 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f20a560-1954-4ef4-aabc-fcacd84575f5-client-ca\") pod \"1f20a560-1954-4ef4-aabc-fcacd84575f5\" (UID: \"1f20a560-1954-4ef4-aabc-fcacd84575f5\") " Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.996901 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-proxy-ca-bundles\") pod \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.996919 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wr82\" (UniqueName: \"kubernetes.io/projected/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-kube-api-access-2wr82\") pod \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.996946 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-serving-cert\") pod \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\" (UID: \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.996971 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-client-ca\") pod \"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\" (UID: 
\"d2545baf-cce3-4e1b-abf9-e3b13b2c1181\") " Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.997869 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-client-ca" (OuterVolumeSpecName: "client-ca") pod "d2545baf-cce3-4e1b-abf9-e3b13b2c1181" (UID: "d2545baf-cce3-4e1b-abf9-e3b13b2c1181"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.999461 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-config" (OuterVolumeSpecName: "config") pod "d2545baf-cce3-4e1b-abf9-e3b13b2c1181" (UID: "d2545baf-cce3-4e1b-abf9-e3b13b2c1181"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.999492 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f20a560-1954-4ef4-aabc-fcacd84575f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "1f20a560-1954-4ef4-aabc-fcacd84575f5" (UID: "1f20a560-1954-4ef4-aabc-fcacd84575f5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:32 crc kubenswrapper[4741]: I0226 08:17:32.999579 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f20a560-1954-4ef4-aabc-fcacd84575f5-config" (OuterVolumeSpecName: "config") pod "1f20a560-1954-4ef4-aabc-fcacd84575f5" (UID: "1f20a560-1954-4ef4-aabc-fcacd84575f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.000560 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d2545baf-cce3-4e1b-abf9-e3b13b2c1181" (UID: "d2545baf-cce3-4e1b-abf9-e3b13b2c1181"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.005878 4741 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.006229 4741 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.006247 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f20a560-1954-4ef4-aabc-fcacd84575f5-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.006263 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.006276 4741 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f20a560-1954-4ef4-aabc-fcacd84575f5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.011978 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1f20a560-1954-4ef4-aabc-fcacd84575f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f20a560-1954-4ef4-aabc-fcacd84575f5" (UID: "1f20a560-1954-4ef4-aabc-fcacd84575f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.012121 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d2545baf-cce3-4e1b-abf9-e3b13b2c1181" (UID: "d2545baf-cce3-4e1b-abf9-e3b13b2c1181"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.026491 4741 scope.go:117] "RemoveContainer" containerID="9a0be1dfe764e2c14ced13ff37b8a0f24f4e3a3a70fbf4cd2b9a449e1057df50" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.029699 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.029775 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.032282 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-kube-api-access-2wr82" (OuterVolumeSpecName: "kube-api-access-2wr82") pod "d2545baf-cce3-4e1b-abf9-e3b13b2c1181" (UID: "d2545baf-cce3-4e1b-abf9-e3b13b2c1181"). InnerVolumeSpecName "kube-api-access-2wr82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.033639 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f20a560-1954-4ef4-aabc-fcacd84575f5-kube-api-access-sshvw" (OuterVolumeSpecName: "kube-api-access-sshvw") pod "1f20a560-1954-4ef4-aabc-fcacd84575f5" (UID: "1f20a560-1954-4ef4-aabc-fcacd84575f5"). InnerVolumeSpecName "kube-api-access-sshvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.107388 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-config\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.107502 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f978943b-ba62-47ac-8b9a-46931531f215-serving-cert\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.107535 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdqnz\" (UniqueName: \"kubernetes.io/projected/f978943b-ba62-47ac-8b9a-46931531f215-kube-api-access-gdqnz\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.107559 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-proxy-ca-bundles\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.107582 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-client-ca\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.107638 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wr82\" (UniqueName: \"kubernetes.io/projected/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-kube-api-access-2wr82\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.107652 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2545baf-cce3-4e1b-abf9-e3b13b2c1181-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.107662 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sshvw\" (UniqueName: \"kubernetes.io/projected/1f20a560-1954-4ef4-aabc-fcacd84575f5-kube-api-access-sshvw\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.107670 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f20a560-1954-4ef4-aabc-fcacd84575f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.209400 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-config\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.209492 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f978943b-ba62-47ac-8b9a-46931531f215-serving-cert\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.209553 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdqnz\" (UniqueName: \"kubernetes.io/projected/f978943b-ba62-47ac-8b9a-46931531f215-kube-api-access-gdqnz\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.209577 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-proxy-ca-bundles\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.209596 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-client-ca\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.210901 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-config\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.211004 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-client-ca\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.211754 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-proxy-ca-bundles\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.217641 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f978943b-ba62-47ac-8b9a-46931531f215-serving-cert\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.228560 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdqnz\" (UniqueName: \"kubernetes.io/projected/f978943b-ba62-47ac-8b9a-46931531f215-kube-api-access-gdqnz\") pod \"controller-manager-6f77b8bd75-lk295\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 
08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.305193 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-975456f97-s92lt"] Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.313581 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-975456f97-s92lt"] Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.325869 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml"] Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.335252 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ff5d988c8-5wfml"] Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.517721 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.799267 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f20a560-1954-4ef4-aabc-fcacd84575f5" path="/var/lib/kubelet/pods/1f20a560-1954-4ef4-aabc-fcacd84575f5/volumes" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.800789 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2545baf-cce3-4e1b-abf9-e3b13b2c1181" path="/var/lib/kubelet/pods/d2545baf-cce3-4e1b-abf9-e3b13b2c1181/volumes" Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.986899 4741 generic.go:334] "Generic (PLEG): container finished" podID="dbf04e22-1cdf-45ca-9a69-110a53166ff6" containerID="2784d715d4f9ee2970dbc4c70583b9717b6e2a5a9021207ff94eabfa469edf30" exitCode=0 Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.987002 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmj66" 
event={"ID":"dbf04e22-1cdf-45ca-9a69-110a53166ff6","Type":"ContainerDied","Data":"2784d715d4f9ee2970dbc4c70583b9717b6e2a5a9021207ff94eabfa469edf30"} Feb 26 08:17:33 crc kubenswrapper[4741]: I0226 08:17:33.990192 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg9r" event={"ID":"4617c569-c733-49d0-8d5f-01a69cb53e73","Type":"ContainerStarted","Data":"19e79963f02397848f44050b9e06c4622d2c2ca23a4af23a0f9d23b15124505d"} Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.018718 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vrg9r" podStartSLOduration=3.035176014 podStartE2EDuration="1m4.018682746s" podCreationTimestamp="2026-02-26 08:16:30 +0000 UTC" firstStartedPulling="2026-02-26 08:16:32.043148508 +0000 UTC m=+227.039085895" lastFinishedPulling="2026-02-26 08:17:33.02665524 +0000 UTC m=+288.022592627" observedRunningTime="2026-02-26 08:17:34.014704211 +0000 UTC m=+289.010641628" watchObservedRunningTime="2026-02-26 08:17:34.018682746 +0000 UTC m=+289.014620153" Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.038786 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f77b8bd75-lk295"] Feb 26 08:17:34 crc kubenswrapper[4741]: W0226 08:17:34.042801 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf978943b_ba62_47ac_8b9a_46931531f215.slice/crio-ab67c8a50b69943d60f68ab086b1d286ae1eef3dae4e8e62102921d00148e101 WatchSource:0}: Error finding container ab67c8a50b69943d60f68ab086b1d286ae1eef3dae4e8e62102921d00148e101: Status 404 returned error can't find the container with id ab67c8a50b69943d60f68ab086b1d286ae1eef3dae4e8e62102921d00148e101 Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.057318 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.057381 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.108613 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.222552 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrtcx"] Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.815864 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.843361 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf04e22-1cdf-45ca-9a69-110a53166ff6-catalog-content\") pod \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\" (UID: \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\") " Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.843441 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttqd7\" (UniqueName: \"kubernetes.io/projected/dbf04e22-1cdf-45ca-9a69-110a53166ff6-kube-api-access-ttqd7\") pod \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\" (UID: \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\") " Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.843560 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf04e22-1cdf-45ca-9a69-110a53166ff6-utilities\") pod \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\" (UID: \"dbf04e22-1cdf-45ca-9a69-110a53166ff6\") " Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.844619 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/dbf04e22-1cdf-45ca-9a69-110a53166ff6-utilities" (OuterVolumeSpecName: "utilities") pod "dbf04e22-1cdf-45ca-9a69-110a53166ff6" (UID: "dbf04e22-1cdf-45ca-9a69-110a53166ff6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.850726 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf04e22-1cdf-45ca-9a69-110a53166ff6-kube-api-access-ttqd7" (OuterVolumeSpecName: "kube-api-access-ttqd7") pod "dbf04e22-1cdf-45ca-9a69-110a53166ff6" (UID: "dbf04e22-1cdf-45ca-9a69-110a53166ff6"). InnerVolumeSpecName "kube-api-access-ttqd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.912314 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbf04e22-1cdf-45ca-9a69-110a53166ff6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbf04e22-1cdf-45ca-9a69-110a53166ff6" (UID: "dbf04e22-1cdf-45ca-9a69-110a53166ff6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.945235 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf04e22-1cdf-45ca-9a69-110a53166ff6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.945280 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttqd7\" (UniqueName: \"kubernetes.io/projected/dbf04e22-1cdf-45ca-9a69-110a53166ff6-kube-api-access-ttqd7\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:34 crc kubenswrapper[4741]: I0226 08:17:34.945297 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf04e22-1cdf-45ca-9a69-110a53166ff6-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.000020 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kss68" event={"ID":"d9461f6e-32f2-46cd-b0be-71ae66fdb20e","Type":"ContainerStarted","Data":"57c6902ac83e2290a6641c0aeedffa155e6ef71e9ca9e6622b3bcf6040d2f9ea"} Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.002480 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmj66" event={"ID":"dbf04e22-1cdf-45ca-9a69-110a53166ff6","Type":"ContainerDied","Data":"79954f83fe06cb21986ad6cae864dde51b73880df21677501221426266f4874b"} Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.002530 4741 scope.go:117] "RemoveContainer" containerID="2784d715d4f9ee2970dbc4c70583b9717b6e2a5a9021207ff94eabfa469edf30" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.002661 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lmj66" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.008164 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.008216 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" event={"ID":"f978943b-ba62-47ac-8b9a-46931531f215","Type":"ContainerStarted","Data":"06a38354e6ab56e291c81776949eea3e785c4cb0b7ff3dd5ca69468204dc2285"} Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.008240 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" event={"ID":"f978943b-ba62-47ac-8b9a-46931531f215","Type":"ContainerStarted","Data":"ab67c8a50b69943d60f68ab086b1d286ae1eef3dae4e8e62102921d00148e101"} Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.021651 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.022674 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kss68" podStartSLOduration=9.298344083 podStartE2EDuration="1m2.022642716s" podCreationTimestamp="2026-02-26 08:16:33 +0000 UTC" firstStartedPulling="2026-02-26 08:16:41.281353906 +0000 UTC m=+236.277291293" lastFinishedPulling="2026-02-26 08:17:34.005652529 +0000 UTC m=+289.001589926" observedRunningTime="2026-02-26 08:17:35.021788022 +0000 UTC m=+290.017725419" watchObservedRunningTime="2026-02-26 08:17:35.022642716 +0000 UTC m=+290.018580114" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.029912 4741 scope.go:117] "RemoveContainer" containerID="63e44f8fd63d7acd613397788f999b8f8e8963c842e69232d7144e9740537b15" Feb 26 08:17:35 
crc kubenswrapper[4741]: I0226 08:17:35.071100 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" podStartSLOduration=5.071076286 podStartE2EDuration="5.071076286s" podCreationTimestamp="2026-02-26 08:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:17:35.043603322 +0000 UTC m=+290.039540749" watchObservedRunningTime="2026-02-26 08:17:35.071076286 +0000 UTC m=+290.067013673" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.074813 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lmj66"] Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.075034 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.079580 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lmj66"] Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.090172 4741 scope.go:117] "RemoveContainer" containerID="81769fb405869c84282ff8515aa900c48310c7e106c02e608d95084cf0c572af" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.802579 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf04e22-1cdf-45ca-9a69-110a53166ff6" path="/var/lib/kubelet/pods/dbf04e22-1cdf-45ca-9a69-110a53166ff6/volumes" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.857456 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj"] Feb 26 08:17:35 crc kubenswrapper[4741]: E0226 08:17:35.858498 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf04e22-1cdf-45ca-9a69-110a53166ff6" containerName="extract-utilities" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 
08:17:35.858551 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf04e22-1cdf-45ca-9a69-110a53166ff6" containerName="extract-utilities" Feb 26 08:17:35 crc kubenswrapper[4741]: E0226 08:17:35.858593 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf04e22-1cdf-45ca-9a69-110a53166ff6" containerName="registry-server" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.858603 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf04e22-1cdf-45ca-9a69-110a53166ff6" containerName="registry-server" Feb 26 08:17:35 crc kubenswrapper[4741]: E0226 08:17:35.858637 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf04e22-1cdf-45ca-9a69-110a53166ff6" containerName="extract-content" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.858644 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf04e22-1cdf-45ca-9a69-110a53166ff6" containerName="extract-content" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.858967 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf04e22-1cdf-45ca-9a69-110a53166ff6" containerName="registry-server" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.859761 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.864022 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.864188 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.865391 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.865402 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.865551 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.865644 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj"] Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.865678 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.967758 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be39e7cc-4697-4761-a7c0-93d9246d6a3a-serving-cert\") pod \"route-controller-manager-7bd856b749-mzkzj\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.967845 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be39e7cc-4697-4761-a7c0-93d9246d6a3a-client-ca\") pod \"route-controller-manager-7bd856b749-mzkzj\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.968099 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7mkb\" (UniqueName: \"kubernetes.io/projected/be39e7cc-4697-4761-a7c0-93d9246d6a3a-kube-api-access-f7mkb\") pod \"route-controller-manager-7bd856b749-mzkzj\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:35 crc kubenswrapper[4741]: I0226 08:17:35.968285 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be39e7cc-4697-4761-a7c0-93d9246d6a3a-config\") pod \"route-controller-manager-7bd856b749-mzkzj\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:36 crc kubenswrapper[4741]: I0226 08:17:36.019437 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jrtcx" podUID="f427872d-44a6-465f-b06a-8289364bab66" containerName="registry-server" containerID="cri-o://d4c80cefe8a45e1385b18f34a2bacd65952300dcf1cce58cff5db7777a73f9ea" gracePeriod=2 Feb 26 08:17:36 crc kubenswrapper[4741]: I0226 08:17:36.070162 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be39e7cc-4697-4761-a7c0-93d9246d6a3a-client-ca\") pod \"route-controller-manager-7bd856b749-mzkzj\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " 
pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:36 crc kubenswrapper[4741]: I0226 08:17:36.071258 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7mkb\" (UniqueName: \"kubernetes.io/projected/be39e7cc-4697-4761-a7c0-93d9246d6a3a-kube-api-access-f7mkb\") pod \"route-controller-manager-7bd856b749-mzkzj\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:36 crc kubenswrapper[4741]: I0226 08:17:36.071360 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be39e7cc-4697-4761-a7c0-93d9246d6a3a-config\") pod \"route-controller-manager-7bd856b749-mzkzj\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:36 crc kubenswrapper[4741]: I0226 08:17:36.071390 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be39e7cc-4697-4761-a7c0-93d9246d6a3a-serving-cert\") pod \"route-controller-manager-7bd856b749-mzkzj\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:36 crc kubenswrapper[4741]: I0226 08:17:36.071154 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be39e7cc-4697-4761-a7c0-93d9246d6a3a-client-ca\") pod \"route-controller-manager-7bd856b749-mzkzj\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:36 crc kubenswrapper[4741]: I0226 08:17:36.073720 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/be39e7cc-4697-4761-a7c0-93d9246d6a3a-config\") pod \"route-controller-manager-7bd856b749-mzkzj\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:36 crc kubenswrapper[4741]: I0226 08:17:36.078050 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be39e7cc-4697-4761-a7c0-93d9246d6a3a-serving-cert\") pod \"route-controller-manager-7bd856b749-mzkzj\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:36 crc kubenswrapper[4741]: I0226 08:17:36.090627 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7mkb\" (UniqueName: \"kubernetes.io/projected/be39e7cc-4697-4761-a7c0-93d9246d6a3a-kube-api-access-f7mkb\") pod \"route-controller-manager-7bd856b749-mzkzj\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:36 crc kubenswrapper[4741]: I0226 08:17:36.197244 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:36 crc kubenswrapper[4741]: I0226 08:17:36.611277 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj"] Feb 26 08:17:36 crc kubenswrapper[4741]: I0226 08:17:36.624651 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9v25"] Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.033643 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" event={"ID":"be39e7cc-4697-4761-a7c0-93d9246d6a3a","Type":"ContainerStarted","Data":"a9b49476252dac5b199039e0b1e00681dc743936646e599a973fbf5e5244e4b1"} Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.033714 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.033734 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" event={"ID":"be39e7cc-4697-4761-a7c0-93d9246d6a3a","Type":"ContainerStarted","Data":"e18ef8840bcdd663ef1de729ccb869dc4f8fcbe00e2f2049d32eee4705ac3dde"} Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.039086 4741 generic.go:334] "Generic (PLEG): container finished" podID="f427872d-44a6-465f-b06a-8289364bab66" containerID="d4c80cefe8a45e1385b18f34a2bacd65952300dcf1cce58cff5db7777a73f9ea" exitCode=0 Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.039145 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrtcx" event={"ID":"f427872d-44a6-465f-b06a-8289364bab66","Type":"ContainerDied","Data":"d4c80cefe8a45e1385b18f34a2bacd65952300dcf1cce58cff5db7777a73f9ea"} Feb 26 08:17:37 crc 
kubenswrapper[4741]: I0226 08:17:37.039529 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j9v25" podUID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" containerName="registry-server" containerID="cri-o://4c04bebcd5a575617b1d8499be6ce5b523041e92df21263c56da55a1f78abbe9" gracePeriod=2 Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.060617 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" podStartSLOduration=7.060598075 podStartE2EDuration="7.060598075s" podCreationTimestamp="2026-02-26 08:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:17:37.059911415 +0000 UTC m=+292.055848812" watchObservedRunningTime="2026-02-26 08:17:37.060598075 +0000 UTC m=+292.056535462" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.228540 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.240995 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.292606 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js74v\" (UniqueName: \"kubernetes.io/projected/f427872d-44a6-465f-b06a-8289364bab66-kube-api-access-js74v\") pod \"f427872d-44a6-465f-b06a-8289364bab66\" (UID: \"f427872d-44a6-465f-b06a-8289364bab66\") " Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.292906 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f427872d-44a6-465f-b06a-8289364bab66-catalog-content\") pod \"f427872d-44a6-465f-b06a-8289364bab66\" (UID: \"f427872d-44a6-465f-b06a-8289364bab66\") " Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.292957 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f427872d-44a6-465f-b06a-8289364bab66-utilities\") pod \"f427872d-44a6-465f-b06a-8289364bab66\" (UID: \"f427872d-44a6-465f-b06a-8289364bab66\") " Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.294045 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f427872d-44a6-465f-b06a-8289364bab66-utilities" (OuterVolumeSpecName: "utilities") pod "f427872d-44a6-465f-b06a-8289364bab66" (UID: "f427872d-44a6-465f-b06a-8289364bab66"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.310604 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f427872d-44a6-465f-b06a-8289364bab66-kube-api-access-js74v" (OuterVolumeSpecName: "kube-api-access-js74v") pod "f427872d-44a6-465f-b06a-8289364bab66" (UID: "f427872d-44a6-465f-b06a-8289364bab66"). InnerVolumeSpecName "kube-api-access-js74v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.326451 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f427872d-44a6-465f-b06a-8289364bab66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f427872d-44a6-465f-b06a-8289364bab66" (UID: "f427872d-44a6-465f-b06a-8289364bab66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.394348 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f427872d-44a6-465f-b06a-8289364bab66-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.394383 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f427872d-44a6-465f-b06a-8289364bab66-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.394395 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js74v\" (UniqueName: \"kubernetes.io/projected/f427872d-44a6-465f-b06a-8289364bab66-kube-api-access-js74v\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.412180 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.496476 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-catalog-content\") pod \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\" (UID: \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\") " Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.496569 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-utilities\") pod \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\" (UID: \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\") " Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.496669 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxvj9\" (UniqueName: \"kubernetes.io/projected/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-kube-api-access-hxvj9\") pod \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\" (UID: \"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3\") " Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.497909 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-utilities" (OuterVolumeSpecName: "utilities") pod "3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" (UID: "3f0ad83e-7982-442d-93a3-8ba01a4e8ec3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.503580 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-kube-api-access-hxvj9" (OuterVolumeSpecName: "kube-api-access-hxvj9") pod "3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" (UID: "3f0ad83e-7982-442d-93a3-8ba01a4e8ec3"). InnerVolumeSpecName "kube-api-access-hxvj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.597980 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.598023 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxvj9\" (UniqueName: \"kubernetes.io/projected/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-kube-api-access-hxvj9\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.630672 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" (UID: "3f0ad83e-7982-442d-93a3-8ba01a4e8ec3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:17:37 crc kubenswrapper[4741]: I0226 08:17:37.699901 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:37 crc kubenswrapper[4741]: E0226 08:17:37.866962 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f0ad83e_7982_442d_93a3_8ba01a4e8ec3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf427872d_44a6_465f_b06a_8289364bab66.slice\": RecentStats: unable to find data in memory cache]" Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.048679 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrtcx" 
event={"ID":"f427872d-44a6-465f-b06a-8289364bab66","Type":"ContainerDied","Data":"4ecfe0f4a1f42c48253988246ec5563f6a2d3b344097f883e59f4650438b2bf7"} Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.048777 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrtcx" Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.049130 4741 scope.go:117] "RemoveContainer" containerID="d4c80cefe8a45e1385b18f34a2bacd65952300dcf1cce58cff5db7777a73f9ea" Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.054363 4741 generic.go:334] "Generic (PLEG): container finished" podID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" containerID="4c04bebcd5a575617b1d8499be6ce5b523041e92df21263c56da55a1f78abbe9" exitCode=0 Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.054432 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9v25" event={"ID":"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3","Type":"ContainerDied","Data":"4c04bebcd5a575617b1d8499be6ce5b523041e92df21263c56da55a1f78abbe9"} Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.054483 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9v25" Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.054489 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9v25" event={"ID":"3f0ad83e-7982-442d-93a3-8ba01a4e8ec3","Type":"ContainerDied","Data":"9bde366975824b1a1b82316c6f3c376a930f65fffd40da6c2179f591e5baaab5"} Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.076943 4741 scope.go:117] "RemoveContainer" containerID="9319300f3cdfaed8746ed96d0c25efeac8e7be2335057e1ab6fdb1358519f3e0" Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.078159 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrtcx"] Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.079725 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrtcx"] Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.090436 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9v25"] Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.093223 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j9v25"] Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.125099 4741 scope.go:117] "RemoveContainer" containerID="41ebb1d7e1e1bfb928fa8211d58963521472ecd8e059bc121faff63e458ee78d" Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.147987 4741 scope.go:117] "RemoveContainer" containerID="4c04bebcd5a575617b1d8499be6ce5b523041e92df21263c56da55a1f78abbe9" Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.168212 4741 scope.go:117] "RemoveContainer" containerID="e837a8cb09c0771615877bb835f7c7b9858d9ae46e06c2fe2fce6e1666e1d454" Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.185705 4741 scope.go:117] "RemoveContainer" containerID="63782ae71a4a9f6f0cdec535aa13cfb1240a20f3c4bdb1b680b1e101ff9f4233" Feb 26 08:17:38 crc 
kubenswrapper[4741]: I0226 08:17:38.205482 4741 scope.go:117] "RemoveContainer" containerID="4c04bebcd5a575617b1d8499be6ce5b523041e92df21263c56da55a1f78abbe9" Feb 26 08:17:38 crc kubenswrapper[4741]: E0226 08:17:38.206255 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c04bebcd5a575617b1d8499be6ce5b523041e92df21263c56da55a1f78abbe9\": container with ID starting with 4c04bebcd5a575617b1d8499be6ce5b523041e92df21263c56da55a1f78abbe9 not found: ID does not exist" containerID="4c04bebcd5a575617b1d8499be6ce5b523041e92df21263c56da55a1f78abbe9" Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.206297 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c04bebcd5a575617b1d8499be6ce5b523041e92df21263c56da55a1f78abbe9"} err="failed to get container status \"4c04bebcd5a575617b1d8499be6ce5b523041e92df21263c56da55a1f78abbe9\": rpc error: code = NotFound desc = could not find container \"4c04bebcd5a575617b1d8499be6ce5b523041e92df21263c56da55a1f78abbe9\": container with ID starting with 4c04bebcd5a575617b1d8499be6ce5b523041e92df21263c56da55a1f78abbe9 not found: ID does not exist" Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.206328 4741 scope.go:117] "RemoveContainer" containerID="e837a8cb09c0771615877bb835f7c7b9858d9ae46e06c2fe2fce6e1666e1d454" Feb 26 08:17:38 crc kubenswrapper[4741]: E0226 08:17:38.206769 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e837a8cb09c0771615877bb835f7c7b9858d9ae46e06c2fe2fce6e1666e1d454\": container with ID starting with e837a8cb09c0771615877bb835f7c7b9858d9ae46e06c2fe2fce6e1666e1d454 not found: ID does not exist" containerID="e837a8cb09c0771615877bb835f7c7b9858d9ae46e06c2fe2fce6e1666e1d454" Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.206794 4741 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e837a8cb09c0771615877bb835f7c7b9858d9ae46e06c2fe2fce6e1666e1d454"} err="failed to get container status \"e837a8cb09c0771615877bb835f7c7b9858d9ae46e06c2fe2fce6e1666e1d454\": rpc error: code = NotFound desc = could not find container \"e837a8cb09c0771615877bb835f7c7b9858d9ae46e06c2fe2fce6e1666e1d454\": container with ID starting with e837a8cb09c0771615877bb835f7c7b9858d9ae46e06c2fe2fce6e1666e1d454 not found: ID does not exist" Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.206836 4741 scope.go:117] "RemoveContainer" containerID="63782ae71a4a9f6f0cdec535aa13cfb1240a20f3c4bdb1b680b1e101ff9f4233" Feb 26 08:17:38 crc kubenswrapper[4741]: E0226 08:17:38.207234 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63782ae71a4a9f6f0cdec535aa13cfb1240a20f3c4bdb1b680b1e101ff9f4233\": container with ID starting with 63782ae71a4a9f6f0cdec535aa13cfb1240a20f3c4bdb1b680b1e101ff9f4233 not found: ID does not exist" containerID="63782ae71a4a9f6f0cdec535aa13cfb1240a20f3c4bdb1b680b1e101ff9f4233" Feb 26 08:17:38 crc kubenswrapper[4741]: I0226 08:17:38.207305 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63782ae71a4a9f6f0cdec535aa13cfb1240a20f3c4bdb1b680b1e101ff9f4233"} err="failed to get container status \"63782ae71a4a9f6f0cdec535aa13cfb1240a20f3c4bdb1b680b1e101ff9f4233\": rpc error: code = NotFound desc = could not find container \"63782ae71a4a9f6f0cdec535aa13cfb1240a20f3c4bdb1b680b1e101ff9f4233\": container with ID starting with 63782ae71a4a9f6f0cdec535aa13cfb1240a20f3c4bdb1b680b1e101ff9f4233 not found: ID does not exist" Feb 26 08:17:39 crc kubenswrapper[4741]: I0226 08:17:39.797871 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" path="/var/lib/kubelet/pods/3f0ad83e-7982-442d-93a3-8ba01a4e8ec3/volumes" Feb 26 08:17:39 crc kubenswrapper[4741]: I0226 
08:17:39.799176 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f427872d-44a6-465f-b06a-8289364bab66" path="/var/lib/kubelet/pods/f427872d-44a6-465f-b06a-8289364bab66/volumes" Feb 26 08:17:40 crc kubenswrapper[4741]: I0226 08:17:40.403646 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:17:40 crc kubenswrapper[4741]: I0226 08:17:40.597241 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:17:40 crc kubenswrapper[4741]: I0226 08:17:40.597620 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:17:40 crc kubenswrapper[4741]: I0226 08:17:40.656754 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:17:41 crc kubenswrapper[4741]: I0226 08:17:41.150920 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:17:43 crc kubenswrapper[4741]: I0226 08:17:43.622677 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vrg9r"] Feb 26 08:17:43 crc kubenswrapper[4741]: I0226 08:17:43.691357 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:17:43 crc kubenswrapper[4741]: I0226 08:17:43.691433 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:17:43 crc kubenswrapper[4741]: I0226 08:17:43.750046 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:17:44 crc kubenswrapper[4741]: I0226 08:17:44.102161 4741 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/community-operators-vrg9r" podUID="4617c569-c733-49d0-8d5f-01a69cb53e73" containerName="registry-server" containerID="cri-o://19e79963f02397848f44050b9e06c4622d2c2ca23a4af23a0f9d23b15124505d" gracePeriod=2 Feb 26 08:17:44 crc kubenswrapper[4741]: I0226 08:17:44.165216 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:17:44 crc kubenswrapper[4741]: I0226 08:17:44.710589 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:17:44 crc kubenswrapper[4741]: I0226 08:17:44.824540 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4617c569-c733-49d0-8d5f-01a69cb53e73-utilities\") pod \"4617c569-c733-49d0-8d5f-01a69cb53e73\" (UID: \"4617c569-c733-49d0-8d5f-01a69cb53e73\") " Feb 26 08:17:44 crc kubenswrapper[4741]: I0226 08:17:44.824623 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4617c569-c733-49d0-8d5f-01a69cb53e73-catalog-content\") pod \"4617c569-c733-49d0-8d5f-01a69cb53e73\" (UID: \"4617c569-c733-49d0-8d5f-01a69cb53e73\") " Feb 26 08:17:44 crc kubenswrapper[4741]: I0226 08:17:44.824738 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j4f8\" (UniqueName: \"kubernetes.io/projected/4617c569-c733-49d0-8d5f-01a69cb53e73-kube-api-access-7j4f8\") pod \"4617c569-c733-49d0-8d5f-01a69cb53e73\" (UID: \"4617c569-c733-49d0-8d5f-01a69cb53e73\") " Feb 26 08:17:44 crc kubenswrapper[4741]: I0226 08:17:44.827444 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4617c569-c733-49d0-8d5f-01a69cb53e73-utilities" (OuterVolumeSpecName: "utilities") pod 
"4617c569-c733-49d0-8d5f-01a69cb53e73" (UID: "4617c569-c733-49d0-8d5f-01a69cb53e73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:17:44 crc kubenswrapper[4741]: I0226 08:17:44.832361 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4617c569-c733-49d0-8d5f-01a69cb53e73-kube-api-access-7j4f8" (OuterVolumeSpecName: "kube-api-access-7j4f8") pod "4617c569-c733-49d0-8d5f-01a69cb53e73" (UID: "4617c569-c733-49d0-8d5f-01a69cb53e73"). InnerVolumeSpecName "kube-api-access-7j4f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:44 crc kubenswrapper[4741]: I0226 08:17:44.879123 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4617c569-c733-49d0-8d5f-01a69cb53e73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4617c569-c733-49d0-8d5f-01a69cb53e73" (UID: "4617c569-c733-49d0-8d5f-01a69cb53e73"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:17:44 crc kubenswrapper[4741]: I0226 08:17:44.926489 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4617c569-c733-49d0-8d5f-01a69cb53e73-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:44 crc kubenswrapper[4741]: I0226 08:17:44.926543 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4617c569-c733-49d0-8d5f-01a69cb53e73-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:44 crc kubenswrapper[4741]: I0226 08:17:44.926563 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j4f8\" (UniqueName: \"kubernetes.io/projected/4617c569-c733-49d0-8d5f-01a69cb53e73-kube-api-access-7j4f8\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.110992 4741 generic.go:334] "Generic (PLEG): container finished" podID="4617c569-c733-49d0-8d5f-01a69cb53e73" containerID="19e79963f02397848f44050b9e06c4622d2c2ca23a4af23a0f9d23b15124505d" exitCode=0 Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.111063 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg9r" event={"ID":"4617c569-c733-49d0-8d5f-01a69cb53e73","Type":"ContainerDied","Data":"19e79963f02397848f44050b9e06c4622d2c2ca23a4af23a0f9d23b15124505d"} Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.111195 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vrg9r" Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.111628 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg9r" event={"ID":"4617c569-c733-49d0-8d5f-01a69cb53e73","Type":"ContainerDied","Data":"4750e59940f6220645785fff7a8d45248375cb1bb5537eb514c584689a3d8ad7"} Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.113293 4741 scope.go:117] "RemoveContainer" containerID="19e79963f02397848f44050b9e06c4622d2c2ca23a4af23a0f9d23b15124505d" Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.134199 4741 scope.go:117] "RemoveContainer" containerID="f0786d6f2dc5cc4b04e495f8dfcf2a4b76105edd34cefdf58d038de1a84f5192" Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.167229 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vrg9r"] Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.167465 4741 scope.go:117] "RemoveContainer" containerID="8fed31c27410c469f08e21336fafa25b5665c8ed788852f9fb2cce9857cf3a54" Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.176155 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vrg9r"] Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.191769 4741 scope.go:117] "RemoveContainer" containerID="19e79963f02397848f44050b9e06c4622d2c2ca23a4af23a0f9d23b15124505d" Feb 26 08:17:45 crc kubenswrapper[4741]: E0226 08:17:45.193206 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19e79963f02397848f44050b9e06c4622d2c2ca23a4af23a0f9d23b15124505d\": container with ID starting with 19e79963f02397848f44050b9e06c4622d2c2ca23a4af23a0f9d23b15124505d not found: ID does not exist" containerID="19e79963f02397848f44050b9e06c4622d2c2ca23a4af23a0f9d23b15124505d" Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.193268 4741 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19e79963f02397848f44050b9e06c4622d2c2ca23a4af23a0f9d23b15124505d"} err="failed to get container status \"19e79963f02397848f44050b9e06c4622d2c2ca23a4af23a0f9d23b15124505d\": rpc error: code = NotFound desc = could not find container \"19e79963f02397848f44050b9e06c4622d2c2ca23a4af23a0f9d23b15124505d\": container with ID starting with 19e79963f02397848f44050b9e06c4622d2c2ca23a4af23a0f9d23b15124505d not found: ID does not exist" Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.193304 4741 scope.go:117] "RemoveContainer" containerID="f0786d6f2dc5cc4b04e495f8dfcf2a4b76105edd34cefdf58d038de1a84f5192" Feb 26 08:17:45 crc kubenswrapper[4741]: E0226 08:17:45.193864 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0786d6f2dc5cc4b04e495f8dfcf2a4b76105edd34cefdf58d038de1a84f5192\": container with ID starting with f0786d6f2dc5cc4b04e495f8dfcf2a4b76105edd34cefdf58d038de1a84f5192 not found: ID does not exist" containerID="f0786d6f2dc5cc4b04e495f8dfcf2a4b76105edd34cefdf58d038de1a84f5192" Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.193956 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0786d6f2dc5cc4b04e495f8dfcf2a4b76105edd34cefdf58d038de1a84f5192"} err="failed to get container status \"f0786d6f2dc5cc4b04e495f8dfcf2a4b76105edd34cefdf58d038de1a84f5192\": rpc error: code = NotFound desc = could not find container \"f0786d6f2dc5cc4b04e495f8dfcf2a4b76105edd34cefdf58d038de1a84f5192\": container with ID starting with f0786d6f2dc5cc4b04e495f8dfcf2a4b76105edd34cefdf58d038de1a84f5192 not found: ID does not exist" Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.194039 4741 scope.go:117] "RemoveContainer" containerID="8fed31c27410c469f08e21336fafa25b5665c8ed788852f9fb2cce9857cf3a54" Feb 26 08:17:45 crc kubenswrapper[4741]: E0226 
08:17:45.194560 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fed31c27410c469f08e21336fafa25b5665c8ed788852f9fb2cce9857cf3a54\": container with ID starting with 8fed31c27410c469f08e21336fafa25b5665c8ed788852f9fb2cce9857cf3a54 not found: ID does not exist" containerID="8fed31c27410c469f08e21336fafa25b5665c8ed788852f9fb2cce9857cf3a54" Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.194618 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fed31c27410c469f08e21336fafa25b5665c8ed788852f9fb2cce9857cf3a54"} err="failed to get container status \"8fed31c27410c469f08e21336fafa25b5665c8ed788852f9fb2cce9857cf3a54\": rpc error: code = NotFound desc = could not find container \"8fed31c27410c469f08e21336fafa25b5665c8ed788852f9fb2cce9857cf3a54\": container with ID starting with 8fed31c27410c469f08e21336fafa25b5665c8ed788852f9fb2cce9857cf3a54 not found: ID does not exist" Feb 26 08:17:45 crc kubenswrapper[4741]: I0226 08:17:45.795774 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4617c569-c733-49d0-8d5f-01a69cb53e73" path="/var/lib/kubelet/pods/4617c569-c733-49d0-8d5f-01a69cb53e73/volumes" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.183195 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f77b8bd75-lk295"] Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.183991 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" podUID="f978943b-ba62-47ac-8b9a-46931531f215" containerName="controller-manager" containerID="cri-o://06a38354e6ab56e291c81776949eea3e785c4cb0b7ff3dd5ca69468204dc2285" gracePeriod=30 Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.283236 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj"] Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.283659 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" containerName="route-controller-manager" containerID="cri-o://a9b49476252dac5b199039e0b1e00681dc743936646e599a973fbf5e5244e4b1" gracePeriod=30 Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.616351 4741 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.617427 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de" gracePeriod=15 Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.617650 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6" gracePeriod=15 Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.617621 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4" gracePeriod=15 Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.617728 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f" gracePeriod=15 Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.617560 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7" gracePeriod=15 Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.619905 4741 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620294 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620313 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620382 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620394 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620406 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" containerName="extract-content" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620417 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" containerName="extract-content" 
Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620440 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f427872d-44a6-465f-b06a-8289364bab66" containerName="registry-server" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620448 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f427872d-44a6-465f-b06a-8289364bab66" containerName="registry-server" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620461 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620470 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620486 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4617c569-c733-49d0-8d5f-01a69cb53e73" containerName="extract-utilities" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620495 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="4617c569-c733-49d0-8d5f-01a69cb53e73" containerName="extract-utilities" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620505 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" containerName="registry-server" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620513 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" containerName="registry-server" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620522 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" containerName="extract-utilities" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620531 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" 
containerName="extract-utilities" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620541 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620548 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620559 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f427872d-44a6-465f-b06a-8289364bab66" containerName="extract-utilities" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620570 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f427872d-44a6-465f-b06a-8289364bab66" containerName="extract-utilities" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620582 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f427872d-44a6-465f-b06a-8289364bab66" containerName="extract-content" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620591 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f427872d-44a6-465f-b06a-8289364bab66" containerName="extract-content" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620603 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620614 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620625 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4617c569-c733-49d0-8d5f-01a69cb53e73" containerName="registry-server" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620632 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="4617c569-c733-49d0-8d5f-01a69cb53e73" 
containerName="registry-server" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620643 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620650 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620661 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4617c569-c733-49d0-8d5f-01a69cb53e73" containerName="extract-content" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620669 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="4617c569-c733-49d0-8d5f-01a69cb53e73" containerName="extract-content" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620682 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620690 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620696 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620703 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.620711 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620717 4741 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620860 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620872 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620880 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620896 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620904 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620915 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="4617c569-c733-49d0-8d5f-01a69cb53e73" containerName="registry-server" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620924 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620935 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f0ad83e-7982-442d-93a3-8ba01a4e8ec3" containerName="registry-server" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620944 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f427872d-44a6-465f-b06a-8289364bab66" containerName="registry-server" Feb 26 08:17:50 crc 
kubenswrapper[4741]: I0226 08:17:50.620953 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.620962 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.621097 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.621112 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.621277 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.622876 4741 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.624141 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.632512 4741 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.720311 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.720378 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.720410 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.720436 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.720479 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.720505 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.720537 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.720564 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.798738 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.799852 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.802772 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.803395 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.803947 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.821885 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7mkb\" (UniqueName: \"kubernetes.io/projected/be39e7cc-4697-4761-a7c0-93d9246d6a3a-kube-api-access-f7mkb\") pod \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\" (UID: 
\"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.821974 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f978943b-ba62-47ac-8b9a-46931531f215-serving-cert\") pod \"f978943b-ba62-47ac-8b9a-46931531f215\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822004 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be39e7cc-4697-4761-a7c0-93d9246d6a3a-client-ca\") pod \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822059 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be39e7cc-4697-4761-a7c0-93d9246d6a3a-serving-cert\") pod \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822096 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-config\") pod \"f978943b-ba62-47ac-8b9a-46931531f215\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822156 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-proxy-ca-bundles\") pod \"f978943b-ba62-47ac-8b9a-46931531f215\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822223 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdqnz\" (UniqueName: 
\"kubernetes.io/projected/f978943b-ba62-47ac-8b9a-46931531f215-kube-api-access-gdqnz\") pod \"f978943b-ba62-47ac-8b9a-46931531f215\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822247 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-client-ca\") pod \"f978943b-ba62-47ac-8b9a-46931531f215\" (UID: \"f978943b-ba62-47ac-8b9a-46931531f215\") " Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822265 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be39e7cc-4697-4761-a7c0-93d9246d6a3a-config\") pod \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\" (UID: \"be39e7cc-4697-4761-a7c0-93d9246d6a3a\") " Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822432 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822466 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822503 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822536 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822585 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822613 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822665 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822667 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822685 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822729 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.822942 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.823027 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-client-ca" (OuterVolumeSpecName: "client-ca") pod "f978943b-ba62-47ac-8b9a-46931531f215" (UID: "f978943b-ba62-47ac-8b9a-46931531f215"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.823249 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f978943b-ba62-47ac-8b9a-46931531f215" (UID: "f978943b-ba62-47ac-8b9a-46931531f215"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.824061 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be39e7cc-4697-4761-a7c0-93d9246d6a3a-config" (OuterVolumeSpecName: "config") pod "be39e7cc-4697-4761-a7c0-93d9246d6a3a" (UID: "be39e7cc-4697-4761-a7c0-93d9246d6a3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.824261 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-config" (OuterVolumeSpecName: "config") pod "f978943b-ba62-47ac-8b9a-46931531f215" (UID: "f978943b-ba62-47ac-8b9a-46931531f215"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.823616 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.823682 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.823851 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.823663 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.823627 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.824890 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be39e7cc-4697-4761-a7c0-93d9246d6a3a-client-ca" (OuterVolumeSpecName: "client-ca") pod "be39e7cc-4697-4761-a7c0-93d9246d6a3a" (UID: "be39e7cc-4697-4761-a7c0-93d9246d6a3a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.829704 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be39e7cc-4697-4761-a7c0-93d9246d6a3a-kube-api-access-f7mkb" (OuterVolumeSpecName: "kube-api-access-f7mkb") pod "be39e7cc-4697-4761-a7c0-93d9246d6a3a" (UID: "be39e7cc-4697-4761-a7c0-93d9246d6a3a"). InnerVolumeSpecName "kube-api-access-f7mkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.830108 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f978943b-ba62-47ac-8b9a-46931531f215-kube-api-access-gdqnz" (OuterVolumeSpecName: "kube-api-access-gdqnz") pod "f978943b-ba62-47ac-8b9a-46931531f215" (UID: "f978943b-ba62-47ac-8b9a-46931531f215"). InnerVolumeSpecName "kube-api-access-gdqnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.830726 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be39e7cc-4697-4761-a7c0-93d9246d6a3a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be39e7cc-4697-4761-a7c0-93d9246d6a3a" (UID: "be39e7cc-4697-4761-a7c0-93d9246d6a3a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.830905 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f978943b-ba62-47ac-8b9a-46931531f215-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f978943b-ba62-47ac-8b9a-46931531f215" (UID: "f978943b-ba62-47ac-8b9a-46931531f215"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.924712 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f978943b-ba62-47ac-8b9a-46931531f215-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.924767 4741 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be39e7cc-4697-4761-a7c0-93d9246d6a3a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.924785 4741 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be39e7cc-4697-4761-a7c0-93d9246d6a3a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.924802 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.924822 4741 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.924841 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdqnz\" (UniqueName: \"kubernetes.io/projected/f978943b-ba62-47ac-8b9a-46931531f215-kube-api-access-gdqnz\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.924863 4741 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f978943b-ba62-47ac-8b9a-46931531f215-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.924882 4741 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be39e7cc-4697-4761-a7c0-93d9246d6a3a-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.924898 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7mkb\" (UniqueName: \"kubernetes.io/projected/be39e7cc-4697-4761-a7c0-93d9246d6a3a-kube-api-access-f7mkb\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.946998 4741 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.947655 4741 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.948537 4741 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.949280 4741 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.949753 4741 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:50 crc kubenswrapper[4741]: I0226 08:17:50.949824 4741 controller.go:115] 
"failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 26 08:17:50 crc kubenswrapper[4741]: E0226 08:17:50.950279 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="200ms" Feb 26 08:17:51 crc kubenswrapper[4741]: E0226 08:17:51.151700 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="400ms" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.161824 4741 generic.go:334] "Generic (PLEG): container finished" podID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" containerID="a9b49476252dac5b199039e0b1e00681dc743936646e599a973fbf5e5244e4b1" exitCode=0 Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.161915 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" event={"ID":"be39e7cc-4697-4761-a7c0-93d9246d6a3a","Type":"ContainerDied","Data":"a9b49476252dac5b199039e0b1e00681dc743936646e599a973fbf5e5244e4b1"} Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.161954 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" event={"ID":"be39e7cc-4697-4761-a7c0-93d9246d6a3a","Type":"ContainerDied","Data":"e18ef8840bcdd663ef1de729ccb869dc4f8fcbe00e2f2049d32eee4705ac3dde"} Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.161977 4741 scope.go:117] "RemoveContainer" containerID="a9b49476252dac5b199039e0b1e00681dc743936646e599a973fbf5e5244e4b1" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.162104 4741 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.163236 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.163610 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.165655 4741 generic.go:334] "Generic (PLEG): container finished" podID="3b39b2de-d5c9-4651-a2de-cb816a67180f" containerID="30d22615a186482239b3fba2888fc7d717e40e8ef9912691d0d8a43d26a85f5c" exitCode=0 Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.165758 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3b39b2de-d5c9-4651-a2de-cb816a67180f","Type":"ContainerDied","Data":"30d22615a186482239b3fba2888fc7d717e40e8ef9912691d0d8a43d26a85f5c"} Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.166669 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: 
connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.167305 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.167709 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.169891 4741 generic.go:334] "Generic (PLEG): container finished" podID="f978943b-ba62-47ac-8b9a-46931531f215" containerID="06a38354e6ab56e291c81776949eea3e785c4cb0b7ff3dd5ca69468204dc2285" exitCode=0 Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.169973 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.169966 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" event={"ID":"f978943b-ba62-47ac-8b9a-46931531f215","Type":"ContainerDied","Data":"06a38354e6ab56e291c81776949eea3e785c4cb0b7ff3dd5ca69468204dc2285"} Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.170042 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" event={"ID":"f978943b-ba62-47ac-8b9a-46931531f215","Type":"ContainerDied","Data":"ab67c8a50b69943d60f68ab086b1d286ae1eef3dae4e8e62102921d00148e101"} Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.170938 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.171415 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.171968 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 
38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.174486 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.176288 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.177001 4741 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f" exitCode=0 Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.177042 4741 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7" exitCode=0 Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.177058 4741 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4" exitCode=0 Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.177072 4741 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6" exitCode=2 Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.184837 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.185073 4741 
status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.185316 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.189981 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.196027 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.196537 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.197786 4741 scope.go:117] "RemoveContainer" containerID="a9b49476252dac5b199039e0b1e00681dc743936646e599a973fbf5e5244e4b1" Feb 26 08:17:51 crc kubenswrapper[4741]: E0226 08:17:51.200342 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9b49476252dac5b199039e0b1e00681dc743936646e599a973fbf5e5244e4b1\": container with ID starting with a9b49476252dac5b199039e0b1e00681dc743936646e599a973fbf5e5244e4b1 not found: ID does not exist" containerID="a9b49476252dac5b199039e0b1e00681dc743936646e599a973fbf5e5244e4b1" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.200433 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9b49476252dac5b199039e0b1e00681dc743936646e599a973fbf5e5244e4b1"} err="failed to get container status \"a9b49476252dac5b199039e0b1e00681dc743936646e599a973fbf5e5244e4b1\": rpc error: code = NotFound desc = could not find container \"a9b49476252dac5b199039e0b1e00681dc743936646e599a973fbf5e5244e4b1\": container with ID starting with a9b49476252dac5b199039e0b1e00681dc743936646e599a973fbf5e5244e4b1 not found: ID does not exist" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.200480 4741 scope.go:117] "RemoveContainer" containerID="06a38354e6ab56e291c81776949eea3e785c4cb0b7ff3dd5ca69468204dc2285" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.222764 4741 scope.go:117] "RemoveContainer" containerID="06a38354e6ab56e291c81776949eea3e785c4cb0b7ff3dd5ca69468204dc2285" Feb 26 08:17:51 crc kubenswrapper[4741]: E0226 08:17:51.224456 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"06a38354e6ab56e291c81776949eea3e785c4cb0b7ff3dd5ca69468204dc2285\": container with ID starting with 06a38354e6ab56e291c81776949eea3e785c4cb0b7ff3dd5ca69468204dc2285 not found: ID does not exist" containerID="06a38354e6ab56e291c81776949eea3e785c4cb0b7ff3dd5ca69468204dc2285" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.224511 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06a38354e6ab56e291c81776949eea3e785c4cb0b7ff3dd5ca69468204dc2285"} err="failed to get container status \"06a38354e6ab56e291c81776949eea3e785c4cb0b7ff3dd5ca69468204dc2285\": rpc error: code = NotFound desc = could not find container \"06a38354e6ab56e291c81776949eea3e785c4cb0b7ff3dd5ca69468204dc2285\": container with ID starting with 06a38354e6ab56e291c81776949eea3e785c4cb0b7ff3dd5ca69468204dc2285 not found: ID does not exist" Feb 26 08:17:51 crc kubenswrapper[4741]: I0226 08:17:51.224546 4741 scope.go:117] "RemoveContainer" containerID="bc9545a221ee94b2bedd7470aafcfe8b866dcc97ecbc1005e60fbeaeef757fa6" Feb 26 08:17:51 crc kubenswrapper[4741]: E0226 08:17:51.346245 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:17:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:17:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:17:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:17:51Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:6f99fe8f155ece83937498888b07c76622c4a9d57faf85421c58e98dbe91a201\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:a0392905d5528ae4396253f0fb315540a65e9d041a23fa7204ff4c50096706ae\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1706887383},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a39473d1443594317812b9e453bc1338c8a047114ef1036a02fa1a6f727cc400\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d319aa4bb0ff5d32a48b47a7cb516d0cf980ced429c362b5180986f874da5d40\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1257183961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"
sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d0f5facf1d0e6c487de9741d96bd2ca8f5d0bd808390ab8f986f9930acbf9d13\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1216936646},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:48c75e09d93cb5f991aeb25a6a7331f20014fc7a025cfb1ac3ca4e65f8a525a9\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:b621350662f546812f6c4d8dc3746e7f9aa73481a87621c54429ecde0129e07e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1215623375},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d
8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":
[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: E0226 08:17:51.347317 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: E0226 08:17:51.348363 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: E0226 08:17:51.348539 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: E0226 08:17:51.348673 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:51 crc kubenswrapper[4741]: E0226 08:17:51.348685 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 
08:17:51 crc kubenswrapper[4741]: E0226 08:17:51.552842 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="800ms" Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.188565 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 08:17:52 crc kubenswrapper[4741]: E0226 08:17:52.354195 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="1.6s" Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.532369 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.533325 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.533968 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.534525 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.546389 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b39b2de-d5c9-4651-a2de-cb816a67180f-kube-api-access\") pod \"3b39b2de-d5c9-4651-a2de-cb816a67180f\" (UID: \"3b39b2de-d5c9-4651-a2de-cb816a67180f\") " Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.546597 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b39b2de-d5c9-4651-a2de-cb816a67180f-var-lock\") pod \"3b39b2de-d5c9-4651-a2de-cb816a67180f\" (UID: 
\"3b39b2de-d5c9-4651-a2de-cb816a67180f\") " Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.546658 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b39b2de-d5c9-4651-a2de-cb816a67180f-kubelet-dir\") pod \"3b39b2de-d5c9-4651-a2de-cb816a67180f\" (UID: \"3b39b2de-d5c9-4651-a2de-cb816a67180f\") " Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.546787 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b39b2de-d5c9-4651-a2de-cb816a67180f-var-lock" (OuterVolumeSpecName: "var-lock") pod "3b39b2de-d5c9-4651-a2de-cb816a67180f" (UID: "3b39b2de-d5c9-4651-a2de-cb816a67180f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.546864 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b39b2de-d5c9-4651-a2de-cb816a67180f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3b39b2de-d5c9-4651-a2de-cb816a67180f" (UID: "3b39b2de-d5c9-4651-a2de-cb816a67180f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.547089 4741 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b39b2de-d5c9-4651-a2de-cb816a67180f-var-lock\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.547186 4741 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b39b2de-d5c9-4651-a2de-cb816a67180f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.556310 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b39b2de-d5c9-4651-a2de-cb816a67180f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3b39b2de-d5c9-4651-a2de-cb816a67180f" (UID: "3b39b2de-d5c9-4651-a2de-cb816a67180f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.648095 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b39b2de-d5c9-4651-a2de-cb816a67180f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:52 crc kubenswrapper[4741]: I0226 08:17:52.998897 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.000687 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.001726 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.002376 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.002886 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.003201 4741 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.155284 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.155573 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.155678 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.155694 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.155836 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.155947 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.156072 4741 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.156092 4741 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.204526 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.204583 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3b39b2de-d5c9-4651-a2de-cb816a67180f","Type":"ContainerDied","Data":"67dedc4edf5ff4020eeb377cc8cc01e53982e5eb5ae4623b5ebe38dc758752af"} Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.204665 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67dedc4edf5ff4020eeb377cc8cc01e53982e5eb5ae4623b5ebe38dc758752af" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.208919 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.210002 4741 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de" exitCode=0 Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.210081 4741 scope.go:117] "RemoveContainer" containerID="17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f" Feb 26 08:17:53 crc 
kubenswrapper[4741]: I0226 08:17:53.210328 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.229097 4741 scope.go:117] "RemoveContainer" containerID="e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.229880 4741 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.230459 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.230860 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.231507 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: 
connection refused" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.242393 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.243194 4741 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.243685 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.244249 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.252742 4741 scope.go:117] "RemoveContainer" containerID="303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.257287 4741 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.271937 4741 scope.go:117] "RemoveContainer" containerID="173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.294888 4741 scope.go:117] "RemoveContainer" containerID="ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.317736 4741 scope.go:117] "RemoveContainer" containerID="96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.350290 4741 scope.go:117] "RemoveContainer" containerID="17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f" Feb 26 08:17:53 crc kubenswrapper[4741]: E0226 08:17:53.351245 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\": container with ID starting with 17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f not found: ID does not exist" containerID="17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.351355 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f"} err="failed to get container status \"17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\": rpc error: code = NotFound desc = could not find container \"17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f\": container with ID starting with 17d00f44df63dd9c3f6671a6c2e3a66a047763318f6b413e1c473b984c9f0b7f not found: ID does not exist" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.351391 4741 scope.go:117] "RemoveContainer" 
containerID="e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7" Feb 26 08:17:53 crc kubenswrapper[4741]: E0226 08:17:53.352305 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\": container with ID starting with e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7 not found: ID does not exist" containerID="e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.352349 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7"} err="failed to get container status \"e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\": rpc error: code = NotFound desc = could not find container \"e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7\": container with ID starting with e1241b02c025819b4b077e5152b66d6bdd51104945635b4a92e9d16e63186de7 not found: ID does not exist" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.352412 4741 scope.go:117] "RemoveContainer" containerID="303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4" Feb 26 08:17:53 crc kubenswrapper[4741]: E0226 08:17:53.353868 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\": container with ID starting with 303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4 not found: ID does not exist" containerID="303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.353953 4741 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4"} err="failed to get container status \"303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\": rpc error: code = NotFound desc = could not find container \"303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4\": container with ID starting with 303027b0d4e71bac678ce9b4108b1e30904fd8c9668929c8aaf3890099a6ecb4 not found: ID does not exist" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.354193 4741 scope.go:117] "RemoveContainer" containerID="173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6" Feb 26 08:17:53 crc kubenswrapper[4741]: E0226 08:17:53.354827 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\": container with ID starting with 173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6 not found: ID does not exist" containerID="173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.354909 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6"} err="failed to get container status \"173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\": rpc error: code = NotFound desc = could not find container \"173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6\": container with ID starting with 173509d6e194f243ba97b171a47ca29b6b4cb8d27cc86e446306372641e95ea6 not found: ID does not exist" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.354957 4741 scope.go:117] "RemoveContainer" containerID="ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de" Feb 26 08:17:53 crc kubenswrapper[4741]: E0226 08:17:53.355501 4741 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\": container with ID starting with ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de not found: ID does not exist" containerID="ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.355537 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de"} err="failed to get container status \"ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\": rpc error: code = NotFound desc = could not find container \"ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de\": container with ID starting with ecc015e0bb417c052ff280db56024dd4e69f88980d5f4489c35c293b32f6f8de not found: ID does not exist" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.355558 4741 scope.go:117] "RemoveContainer" containerID="96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008" Feb 26 08:17:53 crc kubenswrapper[4741]: E0226 08:17:53.356411 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\": container with ID starting with 96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008 not found: ID does not exist" containerID="96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.356450 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008"} err="failed to get container status \"96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\": rpc error: code = NotFound desc = could not find container 
\"96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008\": container with ID starting with 96da1ec283a9320164e1986768df27af5318357d97547f882ef007b665290008 not found: ID does not exist" Feb 26 08:17:53 crc kubenswrapper[4741]: I0226 08:17:53.800980 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 26 08:17:53 crc kubenswrapper[4741]: E0226 08:17:53.955036 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="3.2s" Feb 26 08:17:55 crc kubenswrapper[4741]: E0226 08:17:55.658376 4741 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.166:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:55 crc kubenswrapper[4741]: I0226 08:17:55.659088 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:55 crc kubenswrapper[4741]: W0226 08:17:55.694347 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-213fae85828755adac3144c33af663fa6ba694e367c75c94ad2d85802fd12a85 WatchSource:0}: Error finding container 213fae85828755adac3144c33af663fa6ba694e367c75c94ad2d85802fd12a85: Status 404 returned error can't find the container with id 213fae85828755adac3144c33af663fa6ba694e367c75c94ad2d85802fd12a85 Feb 26 08:17:55 crc kubenswrapper[4741]: E0226 08:17:55.701391 4741 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.166:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1897bdfdb1be8bfb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:17:55.700009979 +0000 UTC m=+310.695947396,LastTimestamp:2026-02-26 08:17:55.700009979 +0000 UTC m=+310.695947396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:17:55 crc kubenswrapper[4741]: I0226 08:17:55.793707 4741 status_manager.go:851] "Failed to get status for pod" 
podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:55 crc kubenswrapper[4741]: I0226 08:17:55.794562 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:55 crc kubenswrapper[4741]: I0226 08:17:55.795198 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.115793 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" podUID="98312bcf-f7e9-4868-904a-c27e825ce830" containerName="oauth-openshift" containerID="cri-o://03180b270efe2130e44062d7a3422c4e9c9f5a019e90f700a83b945c53ebffd0" gracePeriod=15 Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.237818 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"07da2798bafd7540dbc32fad4b260ff3320f2fe51b8f9a15ac63909a8c32da04"} Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.237893 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"213fae85828755adac3144c33af663fa6ba694e367c75c94ad2d85802fd12a85"} Feb 26 08:17:56 crc kubenswrapper[4741]: E0226 08:17:56.239365 4741 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.166:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.239385 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.239819 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.240224 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.522166 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.523148 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.523488 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.523885 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.524567 4741 status_manager.go:851] "Failed to get status for pod" podUID="98312bcf-f7e9-4868-904a-c27e825ce830" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-42f2w\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.628009 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/98312bcf-f7e9-4868-904a-c27e825ce830-audit-dir\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.628091 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-router-certs\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.628167 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-serving-cert\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.628230 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98312bcf-f7e9-4868-904a-c27e825ce830-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.628304 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-provider-selection\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.629214 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-error\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.629284 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-cliconfig\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.629311 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-ocp-branding-template\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.629335 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4r9s\" (UniqueName: \"kubernetes.io/projected/98312bcf-f7e9-4868-904a-c27e825ce830-kube-api-access-m4r9s\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: 
\"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.629354 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-session\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.629384 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-trusted-ca-bundle\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.629419 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-login\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.629442 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-service-ca\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.629462 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-idp-0-file-data\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc 
kubenswrapper[4741]: I0226 08:17:56.629485 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-audit-policies\") pod \"98312bcf-f7e9-4868-904a-c27e825ce830\" (UID: \"98312bcf-f7e9-4868-904a-c27e825ce830\") " Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.629670 4741 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98312bcf-f7e9-4868-904a-c27e825ce830-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.630508 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.630546 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.630654 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.630771 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.636325 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.636389 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98312bcf-f7e9-4868-904a-c27e825ce830-kube-api-access-m4r9s" (OuterVolumeSpecName: "kube-api-access-m4r9s") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "kube-api-access-m4r9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.636655 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.636834 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.637017 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.637482 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.637600 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.638044 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.638174 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "98312bcf-f7e9-4868-904a-c27e825ce830" (UID: "98312bcf-f7e9-4868-904a-c27e825ce830"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.731241 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.732844 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.732949 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.733046 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.733156 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4r9s\" (UniqueName: \"kubernetes.io/projected/98312bcf-f7e9-4868-904a-c27e825ce830-kube-api-access-m4r9s\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.733269 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.733488 4741 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.733589 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.733672 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.733753 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.733855 4741 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98312bcf-f7e9-4868-904a-c27e825ce830-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.734018 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:56 crc kubenswrapper[4741]: I0226 08:17:56.734168 4741 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98312bcf-f7e9-4868-904a-c27e825ce830-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:17:57 crc kubenswrapper[4741]: 
E0226 08:17:57.156338 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="6.4s" Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.252428 4741 generic.go:334] "Generic (PLEG): container finished" podID="98312bcf-f7e9-4868-904a-c27e825ce830" containerID="03180b270efe2130e44062d7a3422c4e9c9f5a019e90f700a83b945c53ebffd0" exitCode=0 Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.252491 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" event={"ID":"98312bcf-f7e9-4868-904a-c27e825ce830","Type":"ContainerDied","Data":"03180b270efe2130e44062d7a3422c4e9c9f5a019e90f700a83b945c53ebffd0"} Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.252530 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" event={"ID":"98312bcf-f7e9-4868-904a-c27e825ce830","Type":"ContainerDied","Data":"f500bb91e4348a0c6191d774b72f7a8493f55e0f8d256475c7fb4968e2363c9f"} Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.252554 4741 scope.go:117] "RemoveContainer" containerID="03180b270efe2130e44062d7a3422c4e9c9f5a019e90f700a83b945c53ebffd0" Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.252698 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.254415 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.255458 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.256400 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.256871 4741 status_manager.go:851] "Failed to get status for pod" podUID="98312bcf-f7e9-4868-904a-c27e825ce830" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-42f2w\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.284238 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" 
pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.285021 4741 status_manager.go:851] "Failed to get status for pod" podUID="98312bcf-f7e9-4868-904a-c27e825ce830" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-42f2w\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.285606 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.286060 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.292102 4741 scope.go:117] "RemoveContainer" containerID="03180b270efe2130e44062d7a3422c4e9c9f5a019e90f700a83b945c53ebffd0" Feb 26 08:17:57 crc kubenswrapper[4741]: E0226 08:17:57.292710 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03180b270efe2130e44062d7a3422c4e9c9f5a019e90f700a83b945c53ebffd0\": container with ID starting with 
03180b270efe2130e44062d7a3422c4e9c9f5a019e90f700a83b945c53ebffd0 not found: ID does not exist" containerID="03180b270efe2130e44062d7a3422c4e9c9f5a019e90f700a83b945c53ebffd0" Feb 26 08:17:57 crc kubenswrapper[4741]: I0226 08:17:57.292778 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03180b270efe2130e44062d7a3422c4e9c9f5a019e90f700a83b945c53ebffd0"} err="failed to get container status \"03180b270efe2130e44062d7a3422c4e9c9f5a019e90f700a83b945c53ebffd0\": rpc error: code = NotFound desc = could not find container \"03180b270efe2130e44062d7a3422c4e9c9f5a019e90f700a83b945c53ebffd0\": container with ID starting with 03180b270efe2130e44062d7a3422c4e9c9f5a019e90f700a83b945c53ebffd0 not found: ID does not exist" Feb 26 08:17:57 crc kubenswrapper[4741]: E0226 08:17:57.988600 4741 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.166:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1897bdfdb1be8bfb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 08:17:55.700009979 +0000 UTC m=+310.695947396,LastTimestamp:2026-02-26 08:17:55.700009979 +0000 UTC m=+310.695947396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 08:18:01 crc 
kubenswrapper[4741]: E0226 08:18:01.593534 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:18:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:18:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:18:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T08:18:01Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:6f99fe8f155ece83937498888b07c76622c4a9d57faf85421c58e98dbe91a201\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:a0392905d5528ae4396253f0fb315540a65e9d041a23fa7204ff4c50096706ae\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1706887383},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a39473d1443594317812b9e453bc1338c8a047114ef1036a02fa1a6f727cc400\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d319aa4bb0ff5d32a48b47a7cb516d0cf980ced429c362b5180986f874da5d40\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1257183961},{\\
\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d0f5facf1d0e6c487de9741d96bd2ca8f5d0bd808390ab8f986f9930acbf9d13\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1216936646},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:48c75e09d93cb5f991aeb25a6a7331f20014fc7a025cfb1ac3ca4e65f8a525a9\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:b621350662f546812f6c4d8dc3746e7f9aa73481a87621c54429ecde0129e07e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1215623375},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af22320
02\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9
cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:18:01 crc kubenswrapper[4741]: E0226 08:18:01.595793 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:18:01 crc kubenswrapper[4741]: E0226 08:18:01.596423 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:18:01 crc kubenswrapper[4741]: E0226 08:18:01.596778 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:18:01 crc kubenswrapper[4741]: E0226 08:18:01.597091 4741 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:18:01 crc 
kubenswrapper[4741]: E0226 08:18:01.597173 4741 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 08:18:02 crc kubenswrapper[4741]: I0226 08:18:02.786616 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:18:02 crc kubenswrapper[4741]: I0226 08:18:02.787816 4741 status_manager.go:851] "Failed to get status for pod" podUID="98312bcf-f7e9-4868-904a-c27e825ce830" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-42f2w\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:18:02 crc kubenswrapper[4741]: I0226 08:18:02.788021 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:18:02 crc kubenswrapper[4741]: I0226 08:18:02.788239 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:18:02 crc kubenswrapper[4741]: I0226 08:18:02.788515 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: 
connect: connection refused" Feb 26 08:18:02 crc kubenswrapper[4741]: I0226 08:18:02.803691 4741 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ea009dd-67d3-42fb-bd36-bad6d9eadd31" Feb 26 08:18:02 crc kubenswrapper[4741]: I0226 08:18:02.803777 4741 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ea009dd-67d3-42fb-bd36-bad6d9eadd31" Feb 26 08:18:02 crc kubenswrapper[4741]: E0226 08:18:02.804644 4741 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:18:02 crc kubenswrapper[4741]: I0226 08:18:02.805386 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:18:03 crc kubenswrapper[4741]: I0226 08:18:03.297808 4741 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="779e78b82b3c0c272c778bb9a2c715df8a45458728cd9ddb714f72d9fc6ea3b8" exitCode=0 Feb 26 08:18:03 crc kubenswrapper[4741]: I0226 08:18:03.297861 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"779e78b82b3c0c272c778bb9a2c715df8a45458728cd9ddb714f72d9fc6ea3b8"} Feb 26 08:18:03 crc kubenswrapper[4741]: I0226 08:18:03.297894 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"08128f97937a537ea746f86b4e09b8c1ac4cbc98445fb6727df85a1ab8fa3e7c"} Feb 26 08:18:03 crc kubenswrapper[4741]: I0226 08:18:03.298399 4741 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ea009dd-67d3-42fb-bd36-bad6d9eadd31" Feb 26 08:18:03 crc kubenswrapper[4741]: I0226 08:18:03.298438 4741 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ea009dd-67d3-42fb-bd36-bad6d9eadd31" Feb 26 08:18:03 crc kubenswrapper[4741]: I0226 08:18:03.298786 4741 status_manager.go:851] "Failed to get status for pod" podUID="f978943b-ba62-47ac-8b9a-46931531f215" pod="openshift-controller-manager/controller-manager-6f77b8bd75-lk295" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f77b8bd75-lk295\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:18:03 crc kubenswrapper[4741]: E0226 08:18:03.299008 4741 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:18:03 crc kubenswrapper[4741]: I0226 08:18:03.299463 4741 status_manager.go:851] "Failed to get status for pod" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:18:03 crc kubenswrapper[4741]: I0226 08:18:03.299941 4741 status_manager.go:851] "Failed to get status for pod" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" pod="openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7bd856b749-mzkzj\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:18:03 crc kubenswrapper[4741]: I0226 08:18:03.300313 4741 
status_manager.go:851] "Failed to get status for pod" podUID="98312bcf-f7e9-4868-904a-c27e825ce830" pod="openshift-authentication/oauth-openshift-558db77b4-42f2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-42f2w\": dial tcp 38.102.83.166:6443: connect: connection refused" Feb 26 08:18:03 crc kubenswrapper[4741]: E0226 08:18:03.557597 4741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="7s" Feb 26 08:18:03 crc kubenswrapper[4741]: I0226 08:18:03.625203 4741 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 08:18:03 crc kubenswrapper[4741]: I0226 08:18:03.625278 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 08:18:04 crc kubenswrapper[4741]: I0226 08:18:04.309575 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fffc07f30218c801b74b41dd229d5038871fe5b8fd5b0daf9c2f8ba079529155"} Feb 26 08:18:04 crc kubenswrapper[4741]: I0226 08:18:04.309628 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"047590e1318c2216bb8c90d629ee511329b5b37cec40916fb90930c156de0dac"} Feb 26 08:18:04 crc kubenswrapper[4741]: I0226 08:18:04.309639 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b134cdea95c8f86e476170969b2cddbf71ac4dac59513e69d2b4b3edd5c30e2f"} Feb 26 08:18:04 crc kubenswrapper[4741]: I0226 08:18:04.316541 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 08:18:04 crc kubenswrapper[4741]: I0226 08:18:04.317731 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 08:18:04 crc kubenswrapper[4741]: I0226 08:18:04.317775 4741 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c" exitCode=1 Feb 26 08:18:04 crc kubenswrapper[4741]: I0226 08:18:04.317800 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c"} Feb 26 08:18:04 crc kubenswrapper[4741]: I0226 08:18:04.318327 4741 scope.go:117] "RemoveContainer" containerID="6f678310cdab169578ece858453f95793b19b86da604b5dfd50f0c20b0701b5c" Feb 26 08:18:05 crc kubenswrapper[4741]: I0226 08:18:05.329038 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 
08:18:05 crc kubenswrapper[4741]: I0226 08:18:05.330258 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 08:18:05 crc kubenswrapper[4741]: I0226 08:18:05.330394 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"60f38cc329014eebe1bd1440850aaf4096b9303e71b7b3d04cf5c129ee721512"} Feb 26 08:18:05 crc kubenswrapper[4741]: I0226 08:18:05.337215 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b5b62f596719a4795d3ccb1da048a714ab437fd637058c8290bddad5a1bdb2a5"} Feb 26 08:18:05 crc kubenswrapper[4741]: I0226 08:18:05.337385 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:18:05 crc kubenswrapper[4741]: I0226 08:18:05.337403 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9493b9d188228d5d3d710df45309df29cc175f20abdca51ece31d63184de528d"} Feb 26 08:18:05 crc kubenswrapper[4741]: I0226 08:18:05.337476 4741 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ea009dd-67d3-42fb-bd36-bad6d9eadd31" Feb 26 08:18:05 crc kubenswrapper[4741]: I0226 08:18:05.337492 4741 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ea009dd-67d3-42fb-bd36-bad6d9eadd31" Feb 26 08:18:07 crc kubenswrapper[4741]: I0226 08:18:07.806080 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 
08:18:07 crc kubenswrapper[4741]: I0226 08:18:07.806167 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:18:07 crc kubenswrapper[4741]: I0226 08:18:07.815323 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:18:10 crc kubenswrapper[4741]: I0226 08:18:10.351459 4741 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:18:10 crc kubenswrapper[4741]: I0226 08:18:10.394073 4741 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ea009dd-67d3-42fb-bd36-bad6d9eadd31" Feb 26 08:18:10 crc kubenswrapper[4741]: I0226 08:18:10.394147 4741 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ea009dd-67d3-42fb-bd36-bad6d9eadd31" Feb 26 08:18:10 crc kubenswrapper[4741]: I0226 08:18:10.400293 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:18:10 crc kubenswrapper[4741]: I0226 08:18:10.404258 4741 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1824e4f2-9c69-4750-ab92-85ebe65e3d99" Feb 26 08:18:11 crc kubenswrapper[4741]: I0226 08:18:11.407535 4741 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ea009dd-67d3-42fb-bd36-bad6d9eadd31" Feb 26 08:18:11 crc kubenswrapper[4741]: I0226 08:18:11.408175 4741 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ea009dd-67d3-42fb-bd36-bad6d9eadd31" Feb 26 08:18:11 crc kubenswrapper[4741]: I0226 08:18:11.444812 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:18:11 crc kubenswrapper[4741]: I0226 08:18:11.453017 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:18:12 crc kubenswrapper[4741]: I0226 08:18:12.415304 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:18:15 crc kubenswrapper[4741]: I0226 08:18:15.828289 4741 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1824e4f2-9c69-4750-ab92-85ebe65e3d99" Feb 26 08:18:19 crc kubenswrapper[4741]: I0226 08:18:19.918742 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 26 08:18:20 crc kubenswrapper[4741]: I0226 08:18:20.623516 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 26 08:18:21 crc kubenswrapper[4741]: I0226 08:18:21.164274 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 26 08:18:21 crc kubenswrapper[4741]: I0226 08:18:21.759064 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 26 08:18:21 crc kubenswrapper[4741]: I0226 08:18:21.799044 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 26 08:18:21 crc kubenswrapper[4741]: I0226 08:18:21.816399 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 26 08:18:21 crc kubenswrapper[4741]: I0226 08:18:21.824998 4741 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 26 08:18:21 crc kubenswrapper[4741]: I0226 08:18:21.872178 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 26 08:18:22 crc kubenswrapper[4741]: I0226 08:18:22.345786 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 26 08:18:22 crc kubenswrapper[4741]: I0226 08:18:22.456956 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 26 08:18:22 crc kubenswrapper[4741]: I0226 08:18:22.656959 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 26 08:18:22 crc kubenswrapper[4741]: I0226 08:18:22.773388 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 26 08:18:22 crc kubenswrapper[4741]: I0226 08:18:22.841899 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 26 08:18:23 crc kubenswrapper[4741]: I0226 08:18:23.169923 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 26 08:18:23 crc kubenswrapper[4741]: I0226 08:18:23.516913 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 26 08:18:23 crc kubenswrapper[4741]: I0226 08:18:23.561314 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 26 08:18:23 crc kubenswrapper[4741]: I0226 08:18:23.630557 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 08:18:23 
crc kubenswrapper[4741]: I0226 08:18:23.649765 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 26 08:18:23 crc kubenswrapper[4741]: I0226 08:18:23.667532 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 08:18:23 crc kubenswrapper[4741]: I0226 08:18:23.932905 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 26 08:18:23 crc kubenswrapper[4741]: I0226 08:18:23.990877 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 08:18:23 crc kubenswrapper[4741]: I0226 08:18:23.996235 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.079655 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.152523 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.161438 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.219808 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.305667 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.336162 4741 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.410813 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.411722 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.492678 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.521378 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.531548 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.564705 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.669144 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.800574 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.935410 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 26 08:18:24 crc kubenswrapper[4741]: I0226 08:18:24.998449 4741 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 26 08:18:25 crc kubenswrapper[4741]: I0226 08:18:25.122672 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 26 08:18:25 crc kubenswrapper[4741]: I0226 08:18:25.127925 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 26 08:18:25 crc kubenswrapper[4741]: I0226 08:18:25.158485 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 26 08:18:25 crc kubenswrapper[4741]: I0226 08:18:25.281759 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 26 08:18:25 crc kubenswrapper[4741]: I0226 08:18:25.303256 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 26 08:18:25 crc kubenswrapper[4741]: I0226 08:18:25.310473 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 26 08:18:25 crc kubenswrapper[4741]: I0226 08:18:25.316531 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 08:18:25 crc kubenswrapper[4741]: I0226 08:18:25.387592 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 26 08:18:25 crc kubenswrapper[4741]: I0226 08:18:25.521614 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 26 08:18:25 crc kubenswrapper[4741]: I0226 08:18:25.859473 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.123460 4741 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.174639 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.184703 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.273055 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.463385 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.468695 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.538778 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.613791 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.751661 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.755073 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.761719 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 
08:18:26.795818 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.828866 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.871345 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.914299 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.930189 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.971225 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 26 08:18:26 crc kubenswrapper[4741]: I0226 08:18:26.978152 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 26 08:18:27 crc kubenswrapper[4741]: I0226 08:18:27.019486 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 08:18:27 crc kubenswrapper[4741]: I0226 08:18:27.036588 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 26 08:18:27 crc kubenswrapper[4741]: I0226 08:18:27.152038 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 26 08:18:27 crc kubenswrapper[4741]: I0226 08:18:27.318727 4741 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 08:18:27 crc kubenswrapper[4741]: I0226 08:18:27.425913 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 26 08:18:27 crc kubenswrapper[4741]: I0226 08:18:27.604477 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 08:18:27 crc kubenswrapper[4741]: I0226 08:18:27.625670 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 08:18:27 crc kubenswrapper[4741]: I0226 08:18:27.650146 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 08:18:27 crc kubenswrapper[4741]: I0226 08:18:27.651652 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 26 08:18:27 crc kubenswrapper[4741]: I0226 08:18:27.693179 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 26 08:18:27 crc kubenswrapper[4741]: I0226 08:18:27.846309 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 08:18:27 crc kubenswrapper[4741]: I0226 08:18:27.901914 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 26 08:18:27 crc kubenswrapper[4741]: I0226 08:18:27.974821 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 26 08:18:28 crc kubenswrapper[4741]: I0226 08:18:28.017582 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 26 08:18:28 crc kubenswrapper[4741]: 
I0226 08:18:28.025179 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 26 08:18:28 crc kubenswrapper[4741]: I0226 08:18:28.106179 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 26 08:18:28 crc kubenswrapper[4741]: I0226 08:18:28.187302 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 26 08:18:28 crc kubenswrapper[4741]: I0226 08:18:28.350425 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 08:18:28 crc kubenswrapper[4741]: I0226 08:18:28.370777 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 08:18:28 crc kubenswrapper[4741]: I0226 08:18:28.454076 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 08:18:28 crc kubenswrapper[4741]: I0226 08:18:28.536622 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 26 08:18:28 crc kubenswrapper[4741]: I0226 08:18:28.675028 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 26 08:18:28 crc kubenswrapper[4741]: I0226 08:18:28.695252 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 26 08:18:28 crc kubenswrapper[4741]: I0226 08:18:28.724478 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 08:18:28 crc kubenswrapper[4741]: I0226 08:18:28.738538 4741 reflector.go:368] Caches populated for *v1.RuntimeClass from 
k8s.io/client-go/informers/factory.go:160 Feb 26 08:18:28 crc kubenswrapper[4741]: I0226 08:18:28.848240 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.030810 4741 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.068953 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.074355 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.081139 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.087540 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.159901 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.180573 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.227145 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.376986 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.415907 4741 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.422279 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.447903 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.475878 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.481210 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.525614 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.575354 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.752927 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.789672 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.880607 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.881888 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 26 08:18:29 crc 
kubenswrapper[4741]: I0226 08:18:29.932017 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 26 08:18:29 crc kubenswrapper[4741]: I0226 08:18:29.999848 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 26 08:18:30 crc kubenswrapper[4741]: I0226 08:18:30.077750 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 26 08:18:30 crc kubenswrapper[4741]: I0226 08:18:30.175692 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 08:18:30 crc kubenswrapper[4741]: I0226 08:18:30.269576 4741 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 26 08:18:30 crc kubenswrapper[4741]: I0226 08:18:30.271616 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 26 08:18:30 crc kubenswrapper[4741]: I0226 08:18:30.289741 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 26 08:18:30 crc kubenswrapper[4741]: I0226 08:18:30.423301 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 26 08:18:30 crc kubenswrapper[4741]: I0226 08:18:30.538392 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 26 08:18:30 crc kubenswrapper[4741]: I0226 08:18:30.568450 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 26 08:18:30 crc kubenswrapper[4741]: I0226 08:18:30.675089 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 26 08:18:30 crc 
kubenswrapper[4741]: I0226 08:18:30.681625 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 26 08:18:30 crc kubenswrapper[4741]: I0226 08:18:30.826049 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 08:18:30 crc kubenswrapper[4741]: I0226 08:18:30.927485 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.004315 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.037623 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.080578 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.111858 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.144741 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.164547 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.228335 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 26 08:18:31 crc kubenswrapper[4741]: 
I0226 08:18:31.234263 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.300639 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.304740 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.439534 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.452268 4741 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.480783 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.481344 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.491284 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.494588 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.571884 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.577943 4741 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.636957 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.639280 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.657870 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.669369 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.677782 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.716530 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.720497 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 08:18:31 crc kubenswrapper[4741]: I0226 08:18:31.946806 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.023716 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.075252 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 
08:18:32.078205 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.079735 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.200670 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.327404 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.358043 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.423001 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.545980 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.592883 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.729512 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.787098 4741 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.819735 4741 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.840292 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.845275 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.883076 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 26 08:18:32 crc kubenswrapper[4741]: I0226 08:18:32.939546 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 26 08:18:33 crc kubenswrapper[4741]: I0226 08:18:33.149765 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 08:18:33 crc kubenswrapper[4741]: I0226 08:18:33.165147 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 08:18:33 crc kubenswrapper[4741]: I0226 08:18:33.480324 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 26 08:18:33 crc kubenswrapper[4741]: I0226 08:18:33.500297 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 26 08:18:33 crc kubenswrapper[4741]: I0226 08:18:33.579973 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 26 08:18:33 crc kubenswrapper[4741]: I0226 08:18:33.618901 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 26 08:18:33 crc kubenswrapper[4741]: I0226 
08:18:33.873157 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 26 08:18:33 crc kubenswrapper[4741]: I0226 08:18:33.878908 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 26 08:18:33 crc kubenswrapper[4741]: I0226 08:18:33.881682 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 26 08:18:33 crc kubenswrapper[4741]: I0226 08:18:33.981025 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.121004 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.210000 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.270330 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.391793 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.478882 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.647809 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.667962 4741 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.763987 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.777412 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.780267 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.805271 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.844289 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.862871 4741 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.871459 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-7bd856b749-mzkzj","openshift-authentication/oauth-openshift-558db77b4-42f2w","openshift-controller-manager/controller-manager-6f77b8bd75-lk295"] Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.871598 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534898-rr6lj","openshift-controller-manager/controller-manager-66875b6555-b4bl8","openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs","openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8"] Feb 26 08:18:34 crc 
kubenswrapper[4741]: E0226 08:18:34.871938 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f978943b-ba62-47ac-8b9a-46931531f215" containerName="controller-manager" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.871964 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f978943b-ba62-47ac-8b9a-46931531f215" containerName="controller-manager" Feb 26 08:18:34 crc kubenswrapper[4741]: E0226 08:18:34.871992 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98312bcf-f7e9-4868-904a-c27e825ce830" containerName="oauth-openshift" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.872006 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="98312bcf-f7e9-4868-904a-c27e825ce830" containerName="oauth-openshift" Feb 26 08:18:34 crc kubenswrapper[4741]: E0226 08:18:34.872031 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" containerName="installer" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.872044 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" containerName="installer" Feb 26 08:18:34 crc kubenswrapper[4741]: E0226 08:18:34.872064 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" containerName="route-controller-manager" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.872076 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" containerName="route-controller-manager" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.872149 4741 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ea009dd-67d3-42fb-bd36-bad6d9eadd31" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.872185 4741 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ea009dd-67d3-42fb-bd36-bad6d9eadd31" Feb 
26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.872287 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b39b2de-d5c9-4651-a2de-cb816a67180f" containerName="installer" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.872323 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" containerName="route-controller-manager" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.872341 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="98312bcf-f7e9-4868-904a-c27e825ce830" containerName="oauth-openshift" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.872358 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f978943b-ba62-47ac-8b9a-46931531f215" containerName="controller-manager" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.873102 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.876200 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534898-rr6lj" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.876989 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.877552 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.880888 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.880923 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.881181 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.881335 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.881398 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.881484 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.881735 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.881847 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.881973 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.882129 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 26 
08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.882233 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.882484 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.883228 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.883886 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.883946 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.884365 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.885465 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.885689 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.885812 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.886167 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.887067 4741 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.887069 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.887508 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.887606 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.887635 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.888292 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.888455 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.890202 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.898907 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.899292 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.903369 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 08:18:34 crc kubenswrapper[4741]: 
I0226 08:18:34.909162 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.940618 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 26 08:18:34 crc kubenswrapper[4741]: I0226 08:18:34.967690 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.967646589 podStartE2EDuration="24.967646589s" podCreationTimestamp="2026-02-26 08:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:18:34.965361894 +0000 UTC m=+349.961299291" watchObservedRunningTime="2026-02-26 08:18:34.967646589 +0000 UTC m=+349.963583996" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010041 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a45d3c75-9707-4363-8095-15c7702c3083-config\") pod \"route-controller-manager-5db6ccf457-gnnh8\" (UID: \"a45d3c75-9707-4363-8095-15c7702c3083\") " pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010125 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-config\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010163 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-proxy-ca-bundles\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010196 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42aebfcc-7921-46c7-a085-4bb8c46042f7-audit-dir\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010219 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010243 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-serving-cert\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010266 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-session\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " 
pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010290 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-client-ca\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010314 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010344 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010367 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a45d3c75-9707-4363-8095-15c7702c3083-serving-cert\") pod \"route-controller-manager-5db6ccf457-gnnh8\" (UID: \"a45d3c75-9707-4363-8095-15c7702c3083\") " pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010392 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k4vj\" (UniqueName: \"kubernetes.io/projected/42aebfcc-7921-46c7-a085-4bb8c46042f7-kube-api-access-6k4vj\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010415 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xgvl\" (UniqueName: \"kubernetes.io/projected/e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d-kube-api-access-7xgvl\") pod \"auto-csr-approver-29534898-rr6lj\" (UID: \"e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d\") " pod="openshift-infra/auto-csr-approver-29534898-rr6lj" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010441 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010470 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42aebfcc-7921-46c7-a085-4bb8c46042f7-audit-policies\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010506 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-user-template-error\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010530 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a45d3c75-9707-4363-8095-15c7702c3083-client-ca\") pod \"route-controller-manager-5db6ccf457-gnnh8\" (UID: \"a45d3c75-9707-4363-8095-15c7702c3083\") " pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010568 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010592 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6mfh\" (UniqueName: \"kubernetes.io/projected/a45d3c75-9707-4363-8095-15c7702c3083-kube-api-access-n6mfh\") pod \"route-controller-manager-5db6ccf457-gnnh8\" (UID: \"a45d3c75-9707-4363-8095-15c7702c3083\") " pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010637 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: 
\"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010662 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010697 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wkq2\" (UniqueName: \"kubernetes.io/projected/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-kube-api-access-8wkq2\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010721 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-user-template-login\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.010751 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 
08:18:35.038784 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.041819 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.114649 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.114746 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6mfh\" (UniqueName: \"kubernetes.io/projected/a45d3c75-9707-4363-8095-15c7702c3083-kube-api-access-n6mfh\") pod \"route-controller-manager-5db6ccf457-gnnh8\" (UID: \"a45d3c75-9707-4363-8095-15c7702c3083\") " pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.114799 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.114836 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.114870 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wkq2\" (UniqueName: \"kubernetes.io/projected/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-kube-api-access-8wkq2\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.114901 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-user-template-login\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.114936 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.114988 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a45d3c75-9707-4363-8095-15c7702c3083-config\") pod \"route-controller-manager-5db6ccf457-gnnh8\" (UID: \"a45d3c75-9707-4363-8095-15c7702c3083\") " pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115022 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-config\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115052 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-proxy-ca-bundles\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115086 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42aebfcc-7921-46c7-a085-4bb8c46042f7-audit-dir\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115136 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115168 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-session\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " 
pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115190 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-serving-cert\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115238 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-client-ca\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115266 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115299 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115327 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a45d3c75-9707-4363-8095-15c7702c3083-serving-cert\") pod \"route-controller-manager-5db6ccf457-gnnh8\" (UID: \"a45d3c75-9707-4363-8095-15c7702c3083\") " pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115357 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k4vj\" (UniqueName: \"kubernetes.io/projected/42aebfcc-7921-46c7-a085-4bb8c46042f7-kube-api-access-6k4vj\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115391 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115420 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xgvl\" (UniqueName: \"kubernetes.io/projected/e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d-kube-api-access-7xgvl\") pod \"auto-csr-approver-29534898-rr6lj\" (UID: \"e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d\") " pod="openshift-infra/auto-csr-approver-29534898-rr6lj" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115428 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42aebfcc-7921-46c7-a085-4bb8c46042f7-audit-dir\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: 
I0226 08:18:35.115453 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42aebfcc-7921-46c7-a085-4bb8c46042f7-audit-policies\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115602 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-user-template-error\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115651 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a45d3c75-9707-4363-8095-15c7702c3083-client-ca\") pod \"route-controller-manager-5db6ccf457-gnnh8\" (UID: \"a45d3c75-9707-4363-8095-15c7702c3083\") " pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.115948 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.116541 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42aebfcc-7921-46c7-a085-4bb8c46042f7-audit-policies\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: 
\"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.117207 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.117245 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a45d3c75-9707-4363-8095-15c7702c3083-client-ca\") pod \"route-controller-manager-5db6ccf457-gnnh8\" (UID: \"a45d3c75-9707-4363-8095-15c7702c3083\") " pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.117786 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.120933 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-proxy-ca-bundles\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.123357 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-config\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.124538 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-client-ca\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.133965 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a45d3c75-9707-4363-8095-15c7702c3083-config\") pod \"route-controller-manager-5db6ccf457-gnnh8\" (UID: \"a45d3c75-9707-4363-8095-15c7702c3083\") " pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.135055 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.135207 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc 
kubenswrapper[4741]: I0226 08:18:35.135353 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-session\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.136168 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.136295 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.137549 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6mfh\" (UniqueName: \"kubernetes.io/projected/a45d3c75-9707-4363-8095-15c7702c3083-kube-api-access-n6mfh\") pod \"route-controller-manager-5db6ccf457-gnnh8\" (UID: \"a45d3c75-9707-4363-8095-15c7702c3083\") " pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.140530 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.141153 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-user-template-error\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.143581 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xgvl\" (UniqueName: \"kubernetes.io/projected/e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d-kube-api-access-7xgvl\") pod \"auto-csr-approver-29534898-rr6lj\" (UID: \"e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d\") " pod="openshift-infra/auto-csr-approver-29534898-rr6lj" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.144000 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k4vj\" (UniqueName: \"kubernetes.io/projected/42aebfcc-7921-46c7-a085-4bb8c46042f7-kube-api-access-6k4vj\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.144405 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42aebfcc-7921-46c7-a085-4bb8c46042f7-v4-0-config-user-template-login\") pod \"oauth-openshift-6fd87b5cc7-nr8cs\" (UID: \"42aebfcc-7921-46c7-a085-4bb8c46042f7\") " pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 
08:18:35.150767 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a45d3c75-9707-4363-8095-15c7702c3083-serving-cert\") pod \"route-controller-manager-5db6ccf457-gnnh8\" (UID: \"a45d3c75-9707-4363-8095-15c7702c3083\") " pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.154451 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wkq2\" (UniqueName: \"kubernetes.io/projected/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-kube-api-access-8wkq2\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.154913 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65df98f3-85ae-481f-9dd5-8c0ff79bb7b8-serving-cert\") pod \"controller-manager-66875b6555-b4bl8\" (UID: \"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8\") " pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.162556 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.197842 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.211295 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534898-rr6lj" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.219769 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.229537 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.427977 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.455776 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.606387 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.610593 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.630143 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8"] Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.636790 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.687918 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534898-rr6lj"] Feb 26 08:18:35 crc kubenswrapper[4741]: W0226 08:18:35.703205 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8c8f9fc_dd53_4233_97f6_ba5b2fdf0a1d.slice/crio-161a3af788618f91bb45ccf8b52a799953dd5b94f16312f24766565643a319a4 WatchSource:0}: Error finding 
container 161a3af788618f91bb45ccf8b52a799953dd5b94f16312f24766565643a319a4: Status 404 returned error can't find the container with id 161a3af788618f91bb45ccf8b52a799953dd5b94f16312f24766565643a319a4 Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.705535 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66875b6555-b4bl8"] Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.713484 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs"] Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.820742 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98312bcf-f7e9-4868-904a-c27e825ce830" path="/var/lib/kubelet/pods/98312bcf-f7e9-4868-904a-c27e825ce830/volumes" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.821980 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be39e7cc-4697-4761-a7c0-93d9246d6a3a" path="/var/lib/kubelet/pods/be39e7cc-4697-4761-a7c0-93d9246d6a3a/volumes" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.823288 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f978943b-ba62-47ac-8b9a-46931531f215" path="/var/lib/kubelet/pods/f978943b-ba62-47ac-8b9a-46931531f215/volumes" Feb 26 08:18:35 crc kubenswrapper[4741]: I0226 08:18:35.889509 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.193861 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.222175 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.507599 4741 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-oauth-apiserver"/"audit-1" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.520440 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.578057 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534898-rr6lj" event={"ID":"e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d","Type":"ContainerStarted","Data":"161a3af788618f91bb45ccf8b52a799953dd5b94f16312f24766565643a319a4"} Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.579633 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" event={"ID":"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8","Type":"ContainerStarted","Data":"5140af5039da0fab8a1a7c5d4663494ced7bc8344b10d2b6126e077e9605f69b"} Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.579662 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" event={"ID":"65df98f3-85ae-481f-9dd5-8c0ff79bb7b8","Type":"ContainerStarted","Data":"928da7f0c7467568d15dcbca68fc3577de173be7c12b7b53e9221c75b3af7d80"} Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.580574 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.581940 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" event={"ID":"a45d3c75-9707-4363-8095-15c7702c3083","Type":"ContainerStarted","Data":"b486092cba0f69fa17a2c233bb4cc2a948e9c194ef03d98265cd7dacc70c6218"} Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.581968 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" 
event={"ID":"a45d3c75-9707-4363-8095-15c7702c3083","Type":"ContainerStarted","Data":"90fe9556d2e0252fa25b00808fa1d6027184c024d75eb77ef8c9095d536aed6f"} Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.588018 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" event={"ID":"42aebfcc-7921-46c7-a085-4bb8c46042f7","Type":"ContainerStarted","Data":"1fe63b843080fa70cf0819e7d877ccb65492e060f2889115e87d3e912d070ba8"} Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.588092 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" event={"ID":"42aebfcc-7921-46c7-a085-4bb8c46042f7","Type":"ContainerStarted","Data":"3378ef09f724ff8b9af336ca9bc64bb7388d4215ebca9ed9ef463231cc2e19db"} Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.588178 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.588492 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.606184 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" podStartSLOduration=46.606097624 podStartE2EDuration="46.606097624s" podCreationTimestamp="2026-02-26 08:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:18:36.599306782 +0000 UTC m=+351.595244179" watchObservedRunningTime="2026-02-26 08:18:36.606097624 +0000 UTC m=+351.602035011" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.619065 4741 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.645947 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" podStartSLOduration=65.645927628 podStartE2EDuration="1m5.645927628s" podCreationTimestamp="2026-02-26 08:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:18:36.625857502 +0000 UTC m=+351.621794899" watchObservedRunningTime="2026-02-26 08:18:36.645927628 +0000 UTC m=+351.641865015" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.648436 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" podStartSLOduration=46.648427979 podStartE2EDuration="46.648427979s" podCreationTimestamp="2026-02-26 08:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:18:36.645543158 +0000 UTC m=+351.641480545" watchObservedRunningTime="2026-02-26 08:18:36.648427979 +0000 UTC m=+351.644365366" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.693383 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.713600 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.733719 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.778625 4741 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.913420 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 26 08:18:36 crc kubenswrapper[4741]: I0226 08:18:36.976610 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 26 08:18:37 crc kubenswrapper[4741]: I0226 08:18:37.114587 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 08:18:37 crc kubenswrapper[4741]: I0226 08:18:37.140221 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 08:18:37 crc kubenswrapper[4741]: I0226 08:18:37.140282 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 08:18:37 crc kubenswrapper[4741]: I0226 08:18:37.356864 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 26 08:18:37 crc kubenswrapper[4741]: I0226 08:18:37.472164 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 26 08:18:37 crc kubenswrapper[4741]: I0226 08:18:37.599655 4741 generic.go:334] "Generic (PLEG): container finished" podID="e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d" containerID="dc1d4b761bb2eeac272e256a6a3c51402c86c237fbceb2a42b2bc6cd7e348d83" exitCode=0 Feb 26 08:18:37 crc kubenswrapper[4741]: I0226 08:18:37.599810 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534898-rr6lj" event={"ID":"e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d","Type":"ContainerDied","Data":"dc1d4b761bb2eeac272e256a6a3c51402c86c237fbceb2a42b2bc6cd7e348d83"} Feb 26 08:18:37 crc kubenswrapper[4741]: I0226 08:18:37.600680 4741 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:37 crc kubenswrapper[4741]: I0226 08:18:37.610270 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" Feb 26 08:18:37 crc kubenswrapper[4741]: I0226 08:18:37.619169 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 26 08:18:37 crc kubenswrapper[4741]: I0226 08:18:37.828208 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 26 08:18:37 crc kubenswrapper[4741]: I0226 08:18:37.840929 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 26 08:18:37 crc kubenswrapper[4741]: I0226 08:18:37.895807 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 26 08:18:38 crc kubenswrapper[4741]: I0226 08:18:38.293775 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 26 08:18:38 crc kubenswrapper[4741]: I0226 08:18:38.303074 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 08:18:38 crc kubenswrapper[4741]: I0226 08:18:38.971217 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534898-rr6lj" Feb 26 08:18:39 crc kubenswrapper[4741]: I0226 08:18:39.171647 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xgvl\" (UniqueName: \"kubernetes.io/projected/e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d-kube-api-access-7xgvl\") pod \"e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d\" (UID: \"e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d\") " Feb 26 08:18:39 crc kubenswrapper[4741]: I0226 08:18:39.181512 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d-kube-api-access-7xgvl" (OuterVolumeSpecName: "kube-api-access-7xgvl") pod "e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d" (UID: "e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d"). InnerVolumeSpecName "kube-api-access-7xgvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:18:39 crc kubenswrapper[4741]: I0226 08:18:39.273582 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xgvl\" (UniqueName: \"kubernetes.io/projected/e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d-kube-api-access-7xgvl\") on node \"crc\" DevicePath \"\"" Feb 26 08:18:39 crc kubenswrapper[4741]: I0226 08:18:39.562763 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 26 08:18:39 crc kubenswrapper[4741]: I0226 08:18:39.615276 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534898-rr6lj" Feb 26 08:18:39 crc kubenswrapper[4741]: I0226 08:18:39.615347 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534898-rr6lj" event={"ID":"e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d","Type":"ContainerDied","Data":"161a3af788618f91bb45ccf8b52a799953dd5b94f16312f24766565643a319a4"} Feb 26 08:18:39 crc kubenswrapper[4741]: I0226 08:18:39.615386 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="161a3af788618f91bb45ccf8b52a799953dd5b94f16312f24766565643a319a4" Feb 26 08:18:40 crc kubenswrapper[4741]: I0226 08:18:40.226869 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 26 08:18:44 crc kubenswrapper[4741]: I0226 08:18:44.116668 4741 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 08:18:44 crc kubenswrapper[4741]: I0226 08:18:44.117264 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://07da2798bafd7540dbc32fad4b260ff3320f2fe51b8f9a15ac63909a8c32da04" gracePeriod=5 Feb 26 08:18:49 crc kubenswrapper[4741]: E0226 08:18:49.239288 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-conmon-07da2798bafd7540dbc32fad4b260ff3320f2fe51b8f9a15ac63909a8c32da04.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-07da2798bafd7540dbc32fad4b260ff3320f2fe51b8f9a15ac63909a8c32da04.scope\": RecentStats: unable to find data in 
memory cache]" Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.676659 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.676739 4741 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="07da2798bafd7540dbc32fad4b260ff3320f2fe51b8f9a15ac63909a8c32da04" exitCode=137 Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.736745 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.737198 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.927205 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.927835 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.928054 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 08:18:49 crc kubenswrapper[4741]: 
I0226 08:18:49.928291 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.928491 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.927365 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.927889 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.928158 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.928351 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.929697 4741 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.930395 4741 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.930551 4741 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.930692 4741 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 26 08:18:49 crc kubenswrapper[4741]: I0226 08:18:49.939682 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:18:50 crc kubenswrapper[4741]: I0226 08:18:50.033554 4741 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 08:18:50 crc kubenswrapper[4741]: I0226 08:18:50.685216 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 08:18:50 crc kubenswrapper[4741]: I0226 08:18:50.685311 4741 scope.go:117] "RemoveContainer" containerID="07da2798bafd7540dbc32fad4b260ff3320f2fe51b8f9a15ac63909a8c32da04" Feb 26 08:18:50 crc kubenswrapper[4741]: I0226 08:18:50.685406 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 08:18:51 crc kubenswrapper[4741]: I0226 08:18:51.795026 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 26 08:19:25 crc kubenswrapper[4741]: I0226 08:19:25.149594 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:19:25 crc kubenswrapper[4741]: I0226 08:19:25.150170 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 
08:19:39.064517 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-58cgc"] Feb 26 08:19:39 crc kubenswrapper[4741]: E0226 08:19:39.065663 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.065682 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 08:19:39 crc kubenswrapper[4741]: E0226 08:19:39.065699 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d" containerName="oc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.065708 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d" containerName="oc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.065869 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d" containerName="oc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.065885 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.066471 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.076692 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-58cgc"] Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.149246 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrqxk\" (UniqueName: \"kubernetes.io/projected/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-kube-api-access-mrqxk\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.149318 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-bound-sa-token\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.149350 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-installation-pull-secrets\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.149475 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-registry-certificates\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.149654 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-registry-tls\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.149758 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-ca-trust-extracted\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.149833 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-trusted-ca\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.150077 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.184278 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.252004 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrqxk\" (UniqueName: \"kubernetes.io/projected/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-kube-api-access-mrqxk\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.252063 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-installation-pull-secrets\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.252085 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-bound-sa-token\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.252103 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-registry-certificates\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.252143 
4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-registry-tls\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.252163 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-ca-trust-extracted\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.252186 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-trusted-ca\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.253778 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-trusted-ca\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.253986 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-ca-trust-extracted\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc 
kubenswrapper[4741]: I0226 08:19:39.254891 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-registry-certificates\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.265820 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-installation-pull-secrets\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.267171 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-registry-tls\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.269929 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-bound-sa-token\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.272547 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrqxk\" (UniqueName: \"kubernetes.io/projected/aa5bbcf2-6f44-42fe-b99b-50d222ce35ba-kube-api-access-mrqxk\") pod \"image-registry-66df7c8f76-58cgc\" (UID: \"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.390322 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:39 crc kubenswrapper[4741]: I0226 08:19:39.858837 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-58cgc"] Feb 26 08:19:40 crc kubenswrapper[4741]: I0226 08:19:40.038178 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" event={"ID":"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba","Type":"ContainerStarted","Data":"fd5ea22ddd94c7e9606a7ae19ce11d3a15ed9a0ae2423f86acb300c6bc65a3fa"} Feb 26 08:19:41 crc kubenswrapper[4741]: I0226 08:19:41.062509 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" event={"ID":"aa5bbcf2-6f44-42fe-b99b-50d222ce35ba","Type":"ContainerStarted","Data":"f372190546740b79b93cb4a4210314fc57d9725aba19e9ba06ba0f1b769a5771"} Feb 26 08:19:41 crc kubenswrapper[4741]: I0226 08:19:41.062934 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:41 crc kubenswrapper[4741]: I0226 08:19:41.091798 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" podStartSLOduration=2.091770209 podStartE2EDuration="2.091770209s" podCreationTimestamp="2026-02-26 08:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:19:41.084527801 +0000 UTC m=+416.080465198" watchObservedRunningTime="2026-02-26 08:19:41.091770209 +0000 UTC m=+416.087707596" Feb 26 08:19:55 crc kubenswrapper[4741]: I0226 08:19:55.149738 4741 patch_prober.go:28] interesting 
pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:19:55 crc kubenswrapper[4741]: I0226 08:19:55.150719 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:19:59 crc kubenswrapper[4741]: I0226 08:19:59.397897 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" Feb 26 08:19:59 crc kubenswrapper[4741]: I0226 08:19:59.479288 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bcnnc"] Feb 26 08:20:00 crc kubenswrapper[4741]: I0226 08:20:00.136912 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534900-dbfxr"] Feb 26 08:20:00 crc kubenswrapper[4741]: I0226 08:20:00.138160 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534900-dbfxr" Feb 26 08:20:00 crc kubenswrapper[4741]: I0226 08:20:00.145784 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:20:00 crc kubenswrapper[4741]: I0226 08:20:00.146126 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:20:00 crc kubenswrapper[4741]: I0226 08:20:00.146301 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:20:00 crc kubenswrapper[4741]: I0226 08:20:00.147732 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534900-dbfxr"] Feb 26 08:20:00 crc kubenswrapper[4741]: I0226 08:20:00.278464 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmb9d\" (UniqueName: \"kubernetes.io/projected/9ba7b413-c1cc-42a1-82d0-7b60ef85568c-kube-api-access-mmb9d\") pod \"auto-csr-approver-29534900-dbfxr\" (UID: \"9ba7b413-c1cc-42a1-82d0-7b60ef85568c\") " pod="openshift-infra/auto-csr-approver-29534900-dbfxr" Feb 26 08:20:00 crc kubenswrapper[4741]: I0226 08:20:00.380140 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmb9d\" (UniqueName: \"kubernetes.io/projected/9ba7b413-c1cc-42a1-82d0-7b60ef85568c-kube-api-access-mmb9d\") pod \"auto-csr-approver-29534900-dbfxr\" (UID: \"9ba7b413-c1cc-42a1-82d0-7b60ef85568c\") " pod="openshift-infra/auto-csr-approver-29534900-dbfxr" Feb 26 08:20:00 crc kubenswrapper[4741]: I0226 08:20:00.403000 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmb9d\" (UniqueName: \"kubernetes.io/projected/9ba7b413-c1cc-42a1-82d0-7b60ef85568c-kube-api-access-mmb9d\") pod \"auto-csr-approver-29534900-dbfxr\" (UID: \"9ba7b413-c1cc-42a1-82d0-7b60ef85568c\") " 
pod="openshift-infra/auto-csr-approver-29534900-dbfxr" Feb 26 08:20:00 crc kubenswrapper[4741]: I0226 08:20:00.462888 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534900-dbfxr" Feb 26 08:20:00 crc kubenswrapper[4741]: I0226 08:20:00.909156 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534900-dbfxr"] Feb 26 08:20:00 crc kubenswrapper[4741]: W0226 08:20:00.915777 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ba7b413_c1cc_42a1_82d0_7b60ef85568c.slice/crio-91ea8618ab461fbe7d728d22e844394377ef4afb8c745b170dee063ddf693c2c WatchSource:0}: Error finding container 91ea8618ab461fbe7d728d22e844394377ef4afb8c745b170dee063ddf693c2c: Status 404 returned error can't find the container with id 91ea8618ab461fbe7d728d22e844394377ef4afb8c745b170dee063ddf693c2c Feb 26 08:20:01 crc kubenswrapper[4741]: I0226 08:20:01.261594 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534900-dbfxr" event={"ID":"9ba7b413-c1cc-42a1-82d0-7b60ef85568c","Type":"ContainerStarted","Data":"91ea8618ab461fbe7d728d22e844394377ef4afb8c745b170dee063ddf693c2c"} Feb 26 08:20:02 crc kubenswrapper[4741]: I0226 08:20:02.268580 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534900-dbfxr" event={"ID":"9ba7b413-c1cc-42a1-82d0-7b60ef85568c","Type":"ContainerStarted","Data":"1b7e54b68fbd0f7bdc17500e3e42ece215d1ef3b9c8e6b5fa24cdb261d2bd0eb"} Feb 26 08:20:02 crc kubenswrapper[4741]: I0226 08:20:02.283458 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534900-dbfxr" podStartSLOduration=1.442663193 podStartE2EDuration="2.283433968s" podCreationTimestamp="2026-02-26 08:20:00 +0000 UTC" firstStartedPulling="2026-02-26 08:20:00.920368843 +0000 UTC 
m=+435.916306230" lastFinishedPulling="2026-02-26 08:20:01.761139608 +0000 UTC m=+436.757077005" observedRunningTime="2026-02-26 08:20:02.281258786 +0000 UTC m=+437.277196193" watchObservedRunningTime="2026-02-26 08:20:02.283433968 +0000 UTC m=+437.279371355" Feb 26 08:20:03 crc kubenswrapper[4741]: I0226 08:20:03.278522 4741 generic.go:334] "Generic (PLEG): container finished" podID="9ba7b413-c1cc-42a1-82d0-7b60ef85568c" containerID="1b7e54b68fbd0f7bdc17500e3e42ece215d1ef3b9c8e6b5fa24cdb261d2bd0eb" exitCode=0 Feb 26 08:20:03 crc kubenswrapper[4741]: I0226 08:20:03.278589 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534900-dbfxr" event={"ID":"9ba7b413-c1cc-42a1-82d0-7b60ef85568c","Type":"ContainerDied","Data":"1b7e54b68fbd0f7bdc17500e3e42ece215d1ef3b9c8e6b5fa24cdb261d2bd0eb"} Feb 26 08:20:05 crc kubenswrapper[4741]: I0226 08:20:04.605910 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534900-dbfxr" Feb 26 08:20:05 crc kubenswrapper[4741]: I0226 08:20:04.650335 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmb9d\" (UniqueName: \"kubernetes.io/projected/9ba7b413-c1cc-42a1-82d0-7b60ef85568c-kube-api-access-mmb9d\") pod \"9ba7b413-c1cc-42a1-82d0-7b60ef85568c\" (UID: \"9ba7b413-c1cc-42a1-82d0-7b60ef85568c\") " Feb 26 08:20:05 crc kubenswrapper[4741]: I0226 08:20:04.658045 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ba7b413-c1cc-42a1-82d0-7b60ef85568c-kube-api-access-mmb9d" (OuterVolumeSpecName: "kube-api-access-mmb9d") pod "9ba7b413-c1cc-42a1-82d0-7b60ef85568c" (UID: "9ba7b413-c1cc-42a1-82d0-7b60ef85568c"). InnerVolumeSpecName "kube-api-access-mmb9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:20:05 crc kubenswrapper[4741]: I0226 08:20:04.752331 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmb9d\" (UniqueName: \"kubernetes.io/projected/9ba7b413-c1cc-42a1-82d0-7b60ef85568c-kube-api-access-mmb9d\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:05 crc kubenswrapper[4741]: I0226 08:20:05.292522 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534900-dbfxr" event={"ID":"9ba7b413-c1cc-42a1-82d0-7b60ef85568c","Type":"ContainerDied","Data":"91ea8618ab461fbe7d728d22e844394377ef4afb8c745b170dee063ddf693c2c"} Feb 26 08:20:05 crc kubenswrapper[4741]: I0226 08:20:05.292587 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534900-dbfxr" Feb 26 08:20:05 crc kubenswrapper[4741]: I0226 08:20:05.293365 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91ea8618ab461fbe7d728d22e844394377ef4afb8c745b170dee063ddf693c2c" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.302823 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rfttb"] Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.304215 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rfttb" podUID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" containerName="registry-server" containerID="cri-o://47770db953472300f382d59117bcca65f05a1954922a62ad02c3e8a72d5ce113" gracePeriod=30 Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.311393 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mwjzl"] Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.311661 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mwjzl" 
podUID="326d4c0d-4365-4ae3-b9b3-8abf324c80e4" containerName="registry-server" containerID="cri-o://bed446f3e6a470a1e815a9364053b71621dde32f02ef12b029f559fe6f782962" gracePeriod=30 Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.330386 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wtbm"] Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.330631 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" podUID="15f99982-b491-4a49-8fb9-f6355b956e11" containerName="marketplace-operator" containerID="cri-o://582052556e5704ffa79a43f94c221819faead3a2b623a473e734ca0fa8fd1598" gracePeriod=30 Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.345564 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfvsg"] Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.345854 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bfvsg" podUID="ba490082-d248-4d24-86ea-a812f638c6f7" containerName="registry-server" containerID="cri-o://2631541cfe52d7d9059e85b2093f393ed60cd749a6542d88903abf7512710c82" gracePeriod=30 Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.363184 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kss68"] Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.363448 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kss68" podUID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" containerName="registry-server" containerID="cri-o://57c6902ac83e2290a6641c0aeedffa155e6ef71e9ca9e6622b3bcf6040d2f9ea" gracePeriod=30 Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.365377 4741 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-tdzlz"] Feb 26 08:20:07 crc kubenswrapper[4741]: E0226 08:20:07.365666 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba7b413-c1cc-42a1-82d0-7b60ef85568c" containerName="oc" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.365681 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba7b413-c1cc-42a1-82d0-7b60ef85568c" containerName="oc" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.365800 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba7b413-c1cc-42a1-82d0-7b60ef85568c" containerName="oc" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.366232 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.377085 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tdzlz"] Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.391460 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmwgk\" (UniqueName: \"kubernetes.io/projected/dc5a16f1-f482-4a9f-81f0-b21fa200d4da-kube-api-access-bmwgk\") pod \"marketplace-operator-79b997595-tdzlz\" (UID: \"dc5a16f1-f482-4a9f-81f0-b21fa200d4da\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.391646 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dc5a16f1-f482-4a9f-81f0-b21fa200d4da-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tdzlz\" (UID: \"dc5a16f1-f482-4a9f-81f0-b21fa200d4da\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.391718 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc5a16f1-f482-4a9f-81f0-b21fa200d4da-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tdzlz\" (UID: \"dc5a16f1-f482-4a9f-81f0-b21fa200d4da\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.492765 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmwgk\" (UniqueName: \"kubernetes.io/projected/dc5a16f1-f482-4a9f-81f0-b21fa200d4da-kube-api-access-bmwgk\") pod \"marketplace-operator-79b997595-tdzlz\" (UID: \"dc5a16f1-f482-4a9f-81f0-b21fa200d4da\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.492827 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dc5a16f1-f482-4a9f-81f0-b21fa200d4da-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tdzlz\" (UID: \"dc5a16f1-f482-4a9f-81f0-b21fa200d4da\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.492884 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc5a16f1-f482-4a9f-81f0-b21fa200d4da-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tdzlz\" (UID: \"dc5a16f1-f482-4a9f-81f0-b21fa200d4da\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.494508 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc5a16f1-f482-4a9f-81f0-b21fa200d4da-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tdzlz\" 
(UID: \"dc5a16f1-f482-4a9f-81f0-b21fa200d4da\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.501478 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dc5a16f1-f482-4a9f-81f0-b21fa200d4da-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tdzlz\" (UID: \"dc5a16f1-f482-4a9f-81f0-b21fa200d4da\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.511917 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmwgk\" (UniqueName: \"kubernetes.io/projected/dc5a16f1-f482-4a9f-81f0-b21fa200d4da-kube-api-access-bmwgk\") pod \"marketplace-operator-79b997595-tdzlz\" (UID: \"dc5a16f1-f482-4a9f-81f0-b21fa200d4da\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.831254 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.834977 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.839830 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.856966 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.901882 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.902423 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15f99982-b491-4a49-8fb9-f6355b956e11-marketplace-operator-metrics\") pod \"15f99982-b491-4a49-8fb9-f6355b956e11\" (UID: \"15f99982-b491-4a49-8fb9-f6355b956e11\") " Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.902459 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15f99982-b491-4a49-8fb9-f6355b956e11-marketplace-trusted-ca\") pod \"15f99982-b491-4a49-8fb9-f6355b956e11\" (UID: \"15f99982-b491-4a49-8fb9-f6355b956e11\") " Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.902495 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-utilities\") pod \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\" (UID: \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\") " Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.902514 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thzxz\" (UniqueName: \"kubernetes.io/projected/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-kube-api-access-thzxz\") pod \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\" (UID: \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\") " Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.902599 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba490082-d248-4d24-86ea-a812f638c6f7-catalog-content\") pod \"ba490082-d248-4d24-86ea-a812f638c6f7\" (UID: \"ba490082-d248-4d24-86ea-a812f638c6f7\") " Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.902628 4741 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-catalog-content\") pod \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\" (UID: \"326d4c0d-4365-4ae3-b9b3-8abf324c80e4\") " Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.902668 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8fnx\" (UniqueName: \"kubernetes.io/projected/ba490082-d248-4d24-86ea-a812f638c6f7-kube-api-access-v8fnx\") pod \"ba490082-d248-4d24-86ea-a812f638c6f7\" (UID: \"ba490082-d248-4d24-86ea-a812f638c6f7\") " Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.902712 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba490082-d248-4d24-86ea-a812f638c6f7-utilities\") pod \"ba490082-d248-4d24-86ea-a812f638c6f7\" (UID: \"ba490082-d248-4d24-86ea-a812f638c6f7\") " Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.902734 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j59h4\" (UniqueName: \"kubernetes.io/projected/15f99982-b491-4a49-8fb9-f6355b956e11-kube-api-access-j59h4\") pod \"15f99982-b491-4a49-8fb9-f6355b956e11\" (UID: \"15f99982-b491-4a49-8fb9-f6355b956e11\") " Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.905165 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-utilities" (OuterVolumeSpecName: "utilities") pod "326d4c0d-4365-4ae3-b9b3-8abf324c80e4" (UID: "326d4c0d-4365-4ae3-b9b3-8abf324c80e4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.905958 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15f99982-b491-4a49-8fb9-f6355b956e11-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "15f99982-b491-4a49-8fb9-f6355b956e11" (UID: "15f99982-b491-4a49-8fb9-f6355b956e11"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.920509 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba490082-d248-4d24-86ea-a812f638c6f7-kube-api-access-v8fnx" (OuterVolumeSpecName: "kube-api-access-v8fnx") pod "ba490082-d248-4d24-86ea-a812f638c6f7" (UID: "ba490082-d248-4d24-86ea-a812f638c6f7"). InnerVolumeSpecName "kube-api-access-v8fnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.926062 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f99982-b491-4a49-8fb9-f6355b956e11-kube-api-access-j59h4" (OuterVolumeSpecName: "kube-api-access-j59h4") pod "15f99982-b491-4a49-8fb9-f6355b956e11" (UID: "15f99982-b491-4a49-8fb9-f6355b956e11"). InnerVolumeSpecName "kube-api-access-j59h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.929834 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-kube-api-access-thzxz" (OuterVolumeSpecName: "kube-api-access-thzxz") pod "326d4c0d-4365-4ae3-b9b3-8abf324c80e4" (UID: "326d4c0d-4365-4ae3-b9b3-8abf324c80e4"). InnerVolumeSpecName "kube-api-access-thzxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.931259 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba490082-d248-4d24-86ea-a812f638c6f7-utilities" (OuterVolumeSpecName: "utilities") pod "ba490082-d248-4d24-86ea-a812f638c6f7" (UID: "ba490082-d248-4d24-86ea-a812f638c6f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.939849 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.955681 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f99982-b491-4a49-8fb9-f6355b956e11-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "15f99982-b491-4a49-8fb9-f6355b956e11" (UID: "15f99982-b491-4a49-8fb9-f6355b956e11"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:20:07 crc kubenswrapper[4741]: I0226 08:20:07.968896 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba490082-d248-4d24-86ea-a812f638c6f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba490082-d248-4d24-86ea-a812f638c6f7" (UID: "ba490082-d248-4d24-86ea-a812f638c6f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.003805 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5slrm\" (UniqueName: \"kubernetes.io/projected/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-kube-api-access-5slrm\") pod \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\" (UID: \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\") " Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.003872 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-catalog-content\") pod \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\" (UID: \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\") " Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.003905 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-utilities\") pod \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\" (UID: \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\") " Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.003928 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tsmp\" (UniqueName: \"kubernetes.io/projected/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-kube-api-access-6tsmp\") pod \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\" (UID: \"769f8af2-a3e7-4d89-a15d-a81b50d12bc4\") " Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.004026 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-catalog-content\") pod \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\" (UID: \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\") " Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.004083 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-utilities\") pod \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\" (UID: \"d9461f6e-32f2-46cd-b0be-71ae66fdb20e\") " Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.004410 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.004430 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thzxz\" (UniqueName: \"kubernetes.io/projected/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-kube-api-access-thzxz\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.004444 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba490082-d248-4d24-86ea-a812f638c6f7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.004456 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8fnx\" (UniqueName: \"kubernetes.io/projected/ba490082-d248-4d24-86ea-a812f638c6f7-kube-api-access-v8fnx\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.004467 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba490082-d248-4d24-86ea-a812f638c6f7-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.004480 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j59h4\" (UniqueName: \"kubernetes.io/projected/15f99982-b491-4a49-8fb9-f6355b956e11-kube-api-access-j59h4\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.004492 4741 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/15f99982-b491-4a49-8fb9-f6355b956e11-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.004505 4741 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15f99982-b491-4a49-8fb9-f6355b956e11-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.005006 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-utilities" (OuterVolumeSpecName: "utilities") pod "769f8af2-a3e7-4d89-a15d-a81b50d12bc4" (UID: "769f8af2-a3e7-4d89-a15d-a81b50d12bc4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.005913 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-utilities" (OuterVolumeSpecName: "utilities") pod "d9461f6e-32f2-46cd-b0be-71ae66fdb20e" (UID: "d9461f6e-32f2-46cd-b0be-71ae66fdb20e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.006549 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-kube-api-access-5slrm" (OuterVolumeSpecName: "kube-api-access-5slrm") pod "d9461f6e-32f2-46cd-b0be-71ae66fdb20e" (UID: "d9461f6e-32f2-46cd-b0be-71ae66fdb20e"). InnerVolumeSpecName "kube-api-access-5slrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.008541 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "326d4c0d-4365-4ae3-b9b3-8abf324c80e4" (UID: "326d4c0d-4365-4ae3-b9b3-8abf324c80e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.009369 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-kube-api-access-6tsmp" (OuterVolumeSpecName: "kube-api-access-6tsmp") pod "769f8af2-a3e7-4d89-a15d-a81b50d12bc4" (UID: "769f8af2-a3e7-4d89-a15d-a81b50d12bc4"). InnerVolumeSpecName "kube-api-access-6tsmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.057343 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "769f8af2-a3e7-4d89-a15d-a81b50d12bc4" (UID: "769f8af2-a3e7-4d89-a15d-a81b50d12bc4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.108281 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326d4c0d-4365-4ae3-b9b3-8abf324c80e4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.108322 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.108334 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5slrm\" (UniqueName: \"kubernetes.io/projected/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-kube-api-access-5slrm\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.108345 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.108353 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.108361 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tsmp\" (UniqueName: \"kubernetes.io/projected/769f8af2-a3e7-4d89-a15d-a81b50d12bc4-kube-api-access-6tsmp\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.131141 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9461f6e-32f2-46cd-b0be-71ae66fdb20e" (UID: 
"d9461f6e-32f2-46cd-b0be-71ae66fdb20e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.210148 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9461f6e-32f2-46cd-b0be-71ae66fdb20e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.316007 4741 generic.go:334] "Generic (PLEG): container finished" podID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" containerID="47770db953472300f382d59117bcca65f05a1954922a62ad02c3e8a72d5ce113" exitCode=0 Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.316096 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rfttb" event={"ID":"769f8af2-a3e7-4d89-a15d-a81b50d12bc4","Type":"ContainerDied","Data":"47770db953472300f382d59117bcca65f05a1954922a62ad02c3e8a72d5ce113"} Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.316148 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rfttb" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.316175 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rfttb" event={"ID":"769f8af2-a3e7-4d89-a15d-a81b50d12bc4","Type":"ContainerDied","Data":"6d6879f41b74ebfa46b6cc954494a8f6132196abde1a3e994409a806c9fbe9c5"} Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.316205 4741 scope.go:117] "RemoveContainer" containerID="47770db953472300f382d59117bcca65f05a1954922a62ad02c3e8a72d5ce113" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.318239 4741 generic.go:334] "Generic (PLEG): container finished" podID="326d4c0d-4365-4ae3-b9b3-8abf324c80e4" containerID="bed446f3e6a470a1e815a9364053b71621dde32f02ef12b029f559fe6f782962" exitCode=0 Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.318280 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mwjzl" event={"ID":"326d4c0d-4365-4ae3-b9b3-8abf324c80e4","Type":"ContainerDied","Data":"bed446f3e6a470a1e815a9364053b71621dde32f02ef12b029f559fe6f782962"} Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.318311 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mwjzl" event={"ID":"326d4c0d-4365-4ae3-b9b3-8abf324c80e4","Type":"ContainerDied","Data":"e9802ca3ab40bfc76bc9181ef09beddf252d81f3628597164ef1e10e412f9f99"} Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.318376 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mwjzl" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.322894 4741 generic.go:334] "Generic (PLEG): container finished" podID="15f99982-b491-4a49-8fb9-f6355b956e11" containerID="582052556e5704ffa79a43f94c221819faead3a2b623a473e734ca0fa8fd1598" exitCode=0 Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.322940 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" event={"ID":"15f99982-b491-4a49-8fb9-f6355b956e11","Type":"ContainerDied","Data":"582052556e5704ffa79a43f94c221819faead3a2b623a473e734ca0fa8fd1598"} Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.322962 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" event={"ID":"15f99982-b491-4a49-8fb9-f6355b956e11","Type":"ContainerDied","Data":"a39d05ab2144a9536eb53ee9bd6c161de404f8b6430a0f00a2c9b28bfc43527b"} Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.323002 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5wtbm" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.340820 4741 scope.go:117] "RemoveContainer" containerID="3e5b8272bf00ab8a628f1ceb3ea3025791b5e874e3fbf84f188bc6d7df934b5e" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.357040 4741 generic.go:334] "Generic (PLEG): container finished" podID="ba490082-d248-4d24-86ea-a812f638c6f7" containerID="2631541cfe52d7d9059e85b2093f393ed60cd749a6542d88903abf7512710c82" exitCode=0 Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.357238 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfvsg" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.357871 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfvsg" event={"ID":"ba490082-d248-4d24-86ea-a812f638c6f7","Type":"ContainerDied","Data":"2631541cfe52d7d9059e85b2093f393ed60cd749a6542d88903abf7512710c82"} Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.357951 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfvsg" event={"ID":"ba490082-d248-4d24-86ea-a812f638c6f7","Type":"ContainerDied","Data":"34715091292a6bfc5e94a8e78ee62bfd289d837d1a892b1063f3dde6cb0b9ec5"} Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.365049 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wtbm"] Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.374103 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wtbm"] Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.397246 4741 generic.go:334] "Generic (PLEG): container finished" podID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" containerID="57c6902ac83e2290a6641c0aeedffa155e6ef71e9ca9e6622b3bcf6040d2f9ea" exitCode=0 Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.397311 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kss68" event={"ID":"d9461f6e-32f2-46cd-b0be-71ae66fdb20e","Type":"ContainerDied","Data":"57c6902ac83e2290a6641c0aeedffa155e6ef71e9ca9e6622b3bcf6040d2f9ea"} Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.397343 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kss68" event={"ID":"d9461f6e-32f2-46cd-b0be-71ae66fdb20e","Type":"ContainerDied","Data":"be599bc485a953ece114b46b759f43a399b20f004d3f76438dafd1500b7c2a49"} Feb 26 08:20:08 crc 
kubenswrapper[4741]: I0226 08:20:08.397593 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kss68" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.403315 4741 scope.go:117] "RemoveContainer" containerID="8b5463a2a56edb88cb747172fa234f861c4bc76c7c450256a92b25dd3b81dfd9" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.404491 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rfttb"] Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.420571 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rfttb"] Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.425048 4741 scope.go:117] "RemoveContainer" containerID="47770db953472300f382d59117bcca65f05a1954922a62ad02c3e8a72d5ce113" Feb 26 08:20:08 crc kubenswrapper[4741]: E0226 08:20:08.426988 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47770db953472300f382d59117bcca65f05a1954922a62ad02c3e8a72d5ce113\": container with ID starting with 47770db953472300f382d59117bcca65f05a1954922a62ad02c3e8a72d5ce113 not found: ID does not exist" containerID="47770db953472300f382d59117bcca65f05a1954922a62ad02c3e8a72d5ce113" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.427037 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47770db953472300f382d59117bcca65f05a1954922a62ad02c3e8a72d5ce113"} err="failed to get container status \"47770db953472300f382d59117bcca65f05a1954922a62ad02c3e8a72d5ce113\": rpc error: code = NotFound desc = could not find container \"47770db953472300f382d59117bcca65f05a1954922a62ad02c3e8a72d5ce113\": container with ID starting with 47770db953472300f382d59117bcca65f05a1954922a62ad02c3e8a72d5ce113 not found: ID does not exist" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 
08:20:08.427087 4741 scope.go:117] "RemoveContainer" containerID="3e5b8272bf00ab8a628f1ceb3ea3025791b5e874e3fbf84f188bc6d7df934b5e" Feb 26 08:20:08 crc kubenswrapper[4741]: E0226 08:20:08.427570 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e5b8272bf00ab8a628f1ceb3ea3025791b5e874e3fbf84f188bc6d7df934b5e\": container with ID starting with 3e5b8272bf00ab8a628f1ceb3ea3025791b5e874e3fbf84f188bc6d7df934b5e not found: ID does not exist" containerID="3e5b8272bf00ab8a628f1ceb3ea3025791b5e874e3fbf84f188bc6d7df934b5e" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.427604 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e5b8272bf00ab8a628f1ceb3ea3025791b5e874e3fbf84f188bc6d7df934b5e"} err="failed to get container status \"3e5b8272bf00ab8a628f1ceb3ea3025791b5e874e3fbf84f188bc6d7df934b5e\": rpc error: code = NotFound desc = could not find container \"3e5b8272bf00ab8a628f1ceb3ea3025791b5e874e3fbf84f188bc6d7df934b5e\": container with ID starting with 3e5b8272bf00ab8a628f1ceb3ea3025791b5e874e3fbf84f188bc6d7df934b5e not found: ID does not exist" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.427629 4741 scope.go:117] "RemoveContainer" containerID="8b5463a2a56edb88cb747172fa234f861c4bc76c7c450256a92b25dd3b81dfd9" Feb 26 08:20:08 crc kubenswrapper[4741]: E0226 08:20:08.427952 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b5463a2a56edb88cb747172fa234f861c4bc76c7c450256a92b25dd3b81dfd9\": container with ID starting with 8b5463a2a56edb88cb747172fa234f861c4bc76c7c450256a92b25dd3b81dfd9 not found: ID does not exist" containerID="8b5463a2a56edb88cb747172fa234f861c4bc76c7c450256a92b25dd3b81dfd9" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.427990 4741 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8b5463a2a56edb88cb747172fa234f861c4bc76c7c450256a92b25dd3b81dfd9"} err="failed to get container status \"8b5463a2a56edb88cb747172fa234f861c4bc76c7c450256a92b25dd3b81dfd9\": rpc error: code = NotFound desc = could not find container \"8b5463a2a56edb88cb747172fa234f861c4bc76c7c450256a92b25dd3b81dfd9\": container with ID starting with 8b5463a2a56edb88cb747172fa234f861c4bc76c7c450256a92b25dd3b81dfd9 not found: ID does not exist" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.428015 4741 scope.go:117] "RemoveContainer" containerID="bed446f3e6a470a1e815a9364053b71621dde32f02ef12b029f559fe6f782962" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.444008 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tdzlz"] Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.448650 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mwjzl"] Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.452536 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mwjzl"] Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.455542 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfvsg"] Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.458262 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfvsg"] Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.513974 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kss68"] Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.517269 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kss68"] Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.522364 4741 scope.go:117] "RemoveContainer" 
containerID="915668c246b3478a49269f16717fa04d1a1c8c8ec483f33c23c96cd97ea537b7" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.541005 4741 scope.go:117] "RemoveContainer" containerID="a809cce12c2092dcea949a57dbf86b577e9e5c3b6538ce949254a97c951b33cf" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.579706 4741 scope.go:117] "RemoveContainer" containerID="bed446f3e6a470a1e815a9364053b71621dde32f02ef12b029f559fe6f782962" Feb 26 08:20:08 crc kubenswrapper[4741]: E0226 08:20:08.580365 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed446f3e6a470a1e815a9364053b71621dde32f02ef12b029f559fe6f782962\": container with ID starting with bed446f3e6a470a1e815a9364053b71621dde32f02ef12b029f559fe6f782962 not found: ID does not exist" containerID="bed446f3e6a470a1e815a9364053b71621dde32f02ef12b029f559fe6f782962" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.580402 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed446f3e6a470a1e815a9364053b71621dde32f02ef12b029f559fe6f782962"} err="failed to get container status \"bed446f3e6a470a1e815a9364053b71621dde32f02ef12b029f559fe6f782962\": rpc error: code = NotFound desc = could not find container \"bed446f3e6a470a1e815a9364053b71621dde32f02ef12b029f559fe6f782962\": container with ID starting with bed446f3e6a470a1e815a9364053b71621dde32f02ef12b029f559fe6f782962 not found: ID does not exist" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.580432 4741 scope.go:117] "RemoveContainer" containerID="915668c246b3478a49269f16717fa04d1a1c8c8ec483f33c23c96cd97ea537b7" Feb 26 08:20:08 crc kubenswrapper[4741]: E0226 08:20:08.580845 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"915668c246b3478a49269f16717fa04d1a1c8c8ec483f33c23c96cd97ea537b7\": container with ID starting with 
915668c246b3478a49269f16717fa04d1a1c8c8ec483f33c23c96cd97ea537b7 not found: ID does not exist" containerID="915668c246b3478a49269f16717fa04d1a1c8c8ec483f33c23c96cd97ea537b7" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.580875 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"915668c246b3478a49269f16717fa04d1a1c8c8ec483f33c23c96cd97ea537b7"} err="failed to get container status \"915668c246b3478a49269f16717fa04d1a1c8c8ec483f33c23c96cd97ea537b7\": rpc error: code = NotFound desc = could not find container \"915668c246b3478a49269f16717fa04d1a1c8c8ec483f33c23c96cd97ea537b7\": container with ID starting with 915668c246b3478a49269f16717fa04d1a1c8c8ec483f33c23c96cd97ea537b7 not found: ID does not exist" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.580895 4741 scope.go:117] "RemoveContainer" containerID="a809cce12c2092dcea949a57dbf86b577e9e5c3b6538ce949254a97c951b33cf" Feb 26 08:20:08 crc kubenswrapper[4741]: E0226 08:20:08.581386 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a809cce12c2092dcea949a57dbf86b577e9e5c3b6538ce949254a97c951b33cf\": container with ID starting with a809cce12c2092dcea949a57dbf86b577e9e5c3b6538ce949254a97c951b33cf not found: ID does not exist" containerID="a809cce12c2092dcea949a57dbf86b577e9e5c3b6538ce949254a97c951b33cf" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.581495 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a809cce12c2092dcea949a57dbf86b577e9e5c3b6538ce949254a97c951b33cf"} err="failed to get container status \"a809cce12c2092dcea949a57dbf86b577e9e5c3b6538ce949254a97c951b33cf\": rpc error: code = NotFound desc = could not find container \"a809cce12c2092dcea949a57dbf86b577e9e5c3b6538ce949254a97c951b33cf\": container with ID starting with a809cce12c2092dcea949a57dbf86b577e9e5c3b6538ce949254a97c951b33cf not found: ID does not 
exist" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.581584 4741 scope.go:117] "RemoveContainer" containerID="582052556e5704ffa79a43f94c221819faead3a2b623a473e734ca0fa8fd1598" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.605388 4741 scope.go:117] "RemoveContainer" containerID="582052556e5704ffa79a43f94c221819faead3a2b623a473e734ca0fa8fd1598" Feb 26 08:20:08 crc kubenswrapper[4741]: E0226 08:20:08.606215 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"582052556e5704ffa79a43f94c221819faead3a2b623a473e734ca0fa8fd1598\": container with ID starting with 582052556e5704ffa79a43f94c221819faead3a2b623a473e734ca0fa8fd1598 not found: ID does not exist" containerID="582052556e5704ffa79a43f94c221819faead3a2b623a473e734ca0fa8fd1598" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.606293 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"582052556e5704ffa79a43f94c221819faead3a2b623a473e734ca0fa8fd1598"} err="failed to get container status \"582052556e5704ffa79a43f94c221819faead3a2b623a473e734ca0fa8fd1598\": rpc error: code = NotFound desc = could not find container \"582052556e5704ffa79a43f94c221819faead3a2b623a473e734ca0fa8fd1598\": container with ID starting with 582052556e5704ffa79a43f94c221819faead3a2b623a473e734ca0fa8fd1598 not found: ID does not exist" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.606349 4741 scope.go:117] "RemoveContainer" containerID="2631541cfe52d7d9059e85b2093f393ed60cd749a6542d88903abf7512710c82" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.624800 4741 scope.go:117] "RemoveContainer" containerID="6762f806a4f6c8620418be330e7397d0461b6701f834201f0ba5e7c09edc82c8" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.646736 4741 scope.go:117] "RemoveContainer" containerID="8c7f3e565d8804b9db94bf8135c6ee566f866ab985f9338ba33a954f970a0d07" Feb 26 08:20:08 crc 
kubenswrapper[4741]: I0226 08:20:08.667705 4741 scope.go:117] "RemoveContainer" containerID="2631541cfe52d7d9059e85b2093f393ed60cd749a6542d88903abf7512710c82" Feb 26 08:20:08 crc kubenswrapper[4741]: E0226 08:20:08.668462 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2631541cfe52d7d9059e85b2093f393ed60cd749a6542d88903abf7512710c82\": container with ID starting with 2631541cfe52d7d9059e85b2093f393ed60cd749a6542d88903abf7512710c82 not found: ID does not exist" containerID="2631541cfe52d7d9059e85b2093f393ed60cd749a6542d88903abf7512710c82" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.668596 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2631541cfe52d7d9059e85b2093f393ed60cd749a6542d88903abf7512710c82"} err="failed to get container status \"2631541cfe52d7d9059e85b2093f393ed60cd749a6542d88903abf7512710c82\": rpc error: code = NotFound desc = could not find container \"2631541cfe52d7d9059e85b2093f393ed60cd749a6542d88903abf7512710c82\": container with ID starting with 2631541cfe52d7d9059e85b2093f393ed60cd749a6542d88903abf7512710c82 not found: ID does not exist" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.668727 4741 scope.go:117] "RemoveContainer" containerID="6762f806a4f6c8620418be330e7397d0461b6701f834201f0ba5e7c09edc82c8" Feb 26 08:20:08 crc kubenswrapper[4741]: E0226 08:20:08.669156 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6762f806a4f6c8620418be330e7397d0461b6701f834201f0ba5e7c09edc82c8\": container with ID starting with 6762f806a4f6c8620418be330e7397d0461b6701f834201f0ba5e7c09edc82c8 not found: ID does not exist" containerID="6762f806a4f6c8620418be330e7397d0461b6701f834201f0ba5e7c09edc82c8" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.669327 4741 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6762f806a4f6c8620418be330e7397d0461b6701f834201f0ba5e7c09edc82c8"} err="failed to get container status \"6762f806a4f6c8620418be330e7397d0461b6701f834201f0ba5e7c09edc82c8\": rpc error: code = NotFound desc = could not find container \"6762f806a4f6c8620418be330e7397d0461b6701f834201f0ba5e7c09edc82c8\": container with ID starting with 6762f806a4f6c8620418be330e7397d0461b6701f834201f0ba5e7c09edc82c8 not found: ID does not exist" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.669438 4741 scope.go:117] "RemoveContainer" containerID="8c7f3e565d8804b9db94bf8135c6ee566f866ab985f9338ba33a954f970a0d07" Feb 26 08:20:08 crc kubenswrapper[4741]: E0226 08:20:08.669831 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c7f3e565d8804b9db94bf8135c6ee566f866ab985f9338ba33a954f970a0d07\": container with ID starting with 8c7f3e565d8804b9db94bf8135c6ee566f866ab985f9338ba33a954f970a0d07 not found: ID does not exist" containerID="8c7f3e565d8804b9db94bf8135c6ee566f866ab985f9338ba33a954f970a0d07" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.669980 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c7f3e565d8804b9db94bf8135c6ee566f866ab985f9338ba33a954f970a0d07"} err="failed to get container status \"8c7f3e565d8804b9db94bf8135c6ee566f866ab985f9338ba33a954f970a0d07\": rpc error: code = NotFound desc = could not find container \"8c7f3e565d8804b9db94bf8135c6ee566f866ab985f9338ba33a954f970a0d07\": container with ID starting with 8c7f3e565d8804b9db94bf8135c6ee566f866ab985f9338ba33a954f970a0d07 not found: ID does not exist" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.670096 4741 scope.go:117] "RemoveContainer" containerID="57c6902ac83e2290a6641c0aeedffa155e6ef71e9ca9e6622b3bcf6040d2f9ea" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.687336 4741 scope.go:117] "RemoveContainer" 
containerID="ce1a2c3aae618a7175937285bd9ffeedcbd8dd8e5bf4599afc442674e80fe445" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.706199 4741 scope.go:117] "RemoveContainer" containerID="c713ba46e508de5fd6bfaa936a4af0165a474a327e67d1371fc166547e09ae30" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.721560 4741 scope.go:117] "RemoveContainer" containerID="57c6902ac83e2290a6641c0aeedffa155e6ef71e9ca9e6622b3bcf6040d2f9ea" Feb 26 08:20:08 crc kubenswrapper[4741]: E0226 08:20:08.722045 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57c6902ac83e2290a6641c0aeedffa155e6ef71e9ca9e6622b3bcf6040d2f9ea\": container with ID starting with 57c6902ac83e2290a6641c0aeedffa155e6ef71e9ca9e6622b3bcf6040d2f9ea not found: ID does not exist" containerID="57c6902ac83e2290a6641c0aeedffa155e6ef71e9ca9e6622b3bcf6040d2f9ea" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.722077 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c6902ac83e2290a6641c0aeedffa155e6ef71e9ca9e6622b3bcf6040d2f9ea"} err="failed to get container status \"57c6902ac83e2290a6641c0aeedffa155e6ef71e9ca9e6622b3bcf6040d2f9ea\": rpc error: code = NotFound desc = could not find container \"57c6902ac83e2290a6641c0aeedffa155e6ef71e9ca9e6622b3bcf6040d2f9ea\": container with ID starting with 57c6902ac83e2290a6641c0aeedffa155e6ef71e9ca9e6622b3bcf6040d2f9ea not found: ID does not exist" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.722103 4741 scope.go:117] "RemoveContainer" containerID="ce1a2c3aae618a7175937285bd9ffeedcbd8dd8e5bf4599afc442674e80fe445" Feb 26 08:20:08 crc kubenswrapper[4741]: E0226 08:20:08.722511 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce1a2c3aae618a7175937285bd9ffeedcbd8dd8e5bf4599afc442674e80fe445\": container with ID starting with 
ce1a2c3aae618a7175937285bd9ffeedcbd8dd8e5bf4599afc442674e80fe445 not found: ID does not exist" containerID="ce1a2c3aae618a7175937285bd9ffeedcbd8dd8e5bf4599afc442674e80fe445" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.722531 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce1a2c3aae618a7175937285bd9ffeedcbd8dd8e5bf4599afc442674e80fe445"} err="failed to get container status \"ce1a2c3aae618a7175937285bd9ffeedcbd8dd8e5bf4599afc442674e80fe445\": rpc error: code = NotFound desc = could not find container \"ce1a2c3aae618a7175937285bd9ffeedcbd8dd8e5bf4599afc442674e80fe445\": container with ID starting with ce1a2c3aae618a7175937285bd9ffeedcbd8dd8e5bf4599afc442674e80fe445 not found: ID does not exist" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.722550 4741 scope.go:117] "RemoveContainer" containerID="c713ba46e508de5fd6bfaa936a4af0165a474a327e67d1371fc166547e09ae30" Feb 26 08:20:08 crc kubenswrapper[4741]: E0226 08:20:08.724122 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c713ba46e508de5fd6bfaa936a4af0165a474a327e67d1371fc166547e09ae30\": container with ID starting with c713ba46e508de5fd6bfaa936a4af0165a474a327e67d1371fc166547e09ae30 not found: ID does not exist" containerID="c713ba46e508de5fd6bfaa936a4af0165a474a327e67d1371fc166547e09ae30" Feb 26 08:20:08 crc kubenswrapper[4741]: I0226 08:20:08.724309 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c713ba46e508de5fd6bfaa936a4af0165a474a327e67d1371fc166547e09ae30"} err="failed to get container status \"c713ba46e508de5fd6bfaa936a4af0165a474a327e67d1371fc166547e09ae30\": rpc error: code = NotFound desc = could not find container \"c713ba46e508de5fd6bfaa936a4af0165a474a327e67d1371fc166547e09ae30\": container with ID starting with c713ba46e508de5fd6bfaa936a4af0165a474a327e67d1371fc166547e09ae30 not found: ID does not 
exist" Feb 26 08:20:09 crc kubenswrapper[4741]: I0226 08:20:09.404505 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" event={"ID":"dc5a16f1-f482-4a9f-81f0-b21fa200d4da","Type":"ContainerStarted","Data":"98ae89a8a55df471f2704795e6f96c9b0fb34002b190097d0378eb39828672e1"} Feb 26 08:20:09 crc kubenswrapper[4741]: I0226 08:20:09.405100 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" event={"ID":"dc5a16f1-f482-4a9f-81f0-b21fa200d4da","Type":"ContainerStarted","Data":"c82e14a2256882423d39973a75a9c74190fabff104fc77adf714a004b68958f2"} Feb 26 08:20:09 crc kubenswrapper[4741]: I0226 08:20:09.409572 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" Feb 26 08:20:09 crc kubenswrapper[4741]: I0226 08:20:09.411738 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" Feb 26 08:20:09 crc kubenswrapper[4741]: I0226 08:20:09.448465 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" podStartSLOduration=2.4484378319999998 podStartE2EDuration="2.448437832s" podCreationTimestamp="2026-02-26 08:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:20:09.432374711 +0000 UTC m=+444.428312108" watchObservedRunningTime="2026-02-26 08:20:09.448437832 +0000 UTC m=+444.444375229" Feb 26 08:20:09 crc kubenswrapper[4741]: I0226 08:20:09.796265 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15f99982-b491-4a49-8fb9-f6355b956e11" path="/var/lib/kubelet/pods/15f99982-b491-4a49-8fb9-f6355b956e11/volumes" Feb 26 08:20:09 crc kubenswrapper[4741]: I0226 08:20:09.797245 4741 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="326d4c0d-4365-4ae3-b9b3-8abf324c80e4" path="/var/lib/kubelet/pods/326d4c0d-4365-4ae3-b9b3-8abf324c80e4/volumes" Feb 26 08:20:09 crc kubenswrapper[4741]: I0226 08:20:09.798406 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" path="/var/lib/kubelet/pods/769f8af2-a3e7-4d89-a15d-a81b50d12bc4/volumes" Feb 26 08:20:09 crc kubenswrapper[4741]: I0226 08:20:09.800387 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba490082-d248-4d24-86ea-a812f638c6f7" path="/var/lib/kubelet/pods/ba490082-d248-4d24-86ea-a812f638c6f7/volumes" Feb 26 08:20:09 crc kubenswrapper[4741]: I0226 08:20:09.801541 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" path="/var/lib/kubelet/pods/d9461f6e-32f2-46cd-b0be-71ae66fdb20e/volumes" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.097608 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2vcnd"] Feb 26 08:20:10 crc kubenswrapper[4741]: E0226 08:20:10.097882 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" containerName="extract-content" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.097900 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" containerName="extract-content" Feb 26 08:20:10 crc kubenswrapper[4741]: E0226 08:20:10.097915 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba490082-d248-4d24-86ea-a812f638c6f7" containerName="extract-utilities" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.097923 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba490082-d248-4d24-86ea-a812f638c6f7" containerName="extract-utilities" Feb 26 08:20:10 crc kubenswrapper[4741]: E0226 08:20:10.097933 4741 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" containerName="registry-server" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.097943 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" containerName="registry-server" Feb 26 08:20:10 crc kubenswrapper[4741]: E0226 08:20:10.097954 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" containerName="extract-utilities" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.097962 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" containerName="extract-utilities" Feb 26 08:20:10 crc kubenswrapper[4741]: E0226 08:20:10.097975 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326d4c0d-4365-4ae3-b9b3-8abf324c80e4" containerName="extract-content" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.097983 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="326d4c0d-4365-4ae3-b9b3-8abf324c80e4" containerName="extract-content" Feb 26 08:20:10 crc kubenswrapper[4741]: E0226 08:20:10.097997 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba490082-d248-4d24-86ea-a812f638c6f7" containerName="extract-content" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.098005 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba490082-d248-4d24-86ea-a812f638c6f7" containerName="extract-content" Feb 26 08:20:10 crc kubenswrapper[4741]: E0226 08:20:10.098016 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" containerName="extract-utilities" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.098024 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" containerName="extract-utilities" Feb 26 08:20:10 crc kubenswrapper[4741]: E0226 08:20:10.098033 4741 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="326d4c0d-4365-4ae3-b9b3-8abf324c80e4" containerName="registry-server" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.098041 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="326d4c0d-4365-4ae3-b9b3-8abf324c80e4" containerName="registry-server" Feb 26 08:20:10 crc kubenswrapper[4741]: E0226 08:20:10.098053 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" containerName="registry-server" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.098063 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" containerName="registry-server" Feb 26 08:20:10 crc kubenswrapper[4741]: E0226 08:20:10.098079 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f99982-b491-4a49-8fb9-f6355b956e11" containerName="marketplace-operator" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.098088 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f99982-b491-4a49-8fb9-f6355b956e11" containerName="marketplace-operator" Feb 26 08:20:10 crc kubenswrapper[4741]: E0226 08:20:10.098100 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" containerName="extract-content" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.098133 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" containerName="extract-content" Feb 26 08:20:10 crc kubenswrapper[4741]: E0226 08:20:10.098147 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326d4c0d-4365-4ae3-b9b3-8abf324c80e4" containerName="extract-utilities" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.098157 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="326d4c0d-4365-4ae3-b9b3-8abf324c80e4" containerName="extract-utilities" Feb 26 08:20:10 crc kubenswrapper[4741]: E0226 08:20:10.098169 4741 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ba490082-d248-4d24-86ea-a812f638c6f7" containerName="registry-server" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.098176 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba490082-d248-4d24-86ea-a812f638c6f7" containerName="registry-server" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.098314 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f99982-b491-4a49-8fb9-f6355b956e11" containerName="marketplace-operator" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.098330 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="326d4c0d-4365-4ae3-b9b3-8abf324c80e4" containerName="registry-server" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.098338 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9461f6e-32f2-46cd-b0be-71ae66fdb20e" containerName="registry-server" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.098353 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="769f8af2-a3e7-4d89-a15d-a81b50d12bc4" containerName="registry-server" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.098364 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba490082-d248-4d24-86ea-a812f638c6f7" containerName="registry-server" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.099228 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.101597 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.114646 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62rp8\" (UniqueName: \"kubernetes.io/projected/93b4b5c9-a048-4219-86a9-ef1ff11cc024-kube-api-access-62rp8\") pod \"certified-operators-2vcnd\" (UID: \"93b4b5c9-a048-4219-86a9-ef1ff11cc024\") " pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.114715 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93b4b5c9-a048-4219-86a9-ef1ff11cc024-catalog-content\") pod \"certified-operators-2vcnd\" (UID: \"93b4b5c9-a048-4219-86a9-ef1ff11cc024\") " pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.114774 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93b4b5c9-a048-4219-86a9-ef1ff11cc024-utilities\") pod \"certified-operators-2vcnd\" (UID: \"93b4b5c9-a048-4219-86a9-ef1ff11cc024\") " pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.116651 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2vcnd"] Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.215486 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93b4b5c9-a048-4219-86a9-ef1ff11cc024-utilities\") pod \"certified-operators-2vcnd\" (UID: 
\"93b4b5c9-a048-4219-86a9-ef1ff11cc024\") " pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.215544 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62rp8\" (UniqueName: \"kubernetes.io/projected/93b4b5c9-a048-4219-86a9-ef1ff11cc024-kube-api-access-62rp8\") pod \"certified-operators-2vcnd\" (UID: \"93b4b5c9-a048-4219-86a9-ef1ff11cc024\") " pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.215581 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93b4b5c9-a048-4219-86a9-ef1ff11cc024-catalog-content\") pod \"certified-operators-2vcnd\" (UID: \"93b4b5c9-a048-4219-86a9-ef1ff11cc024\") " pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.216285 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93b4b5c9-a048-4219-86a9-ef1ff11cc024-utilities\") pod \"certified-operators-2vcnd\" (UID: \"93b4b5c9-a048-4219-86a9-ef1ff11cc024\") " pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.216297 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93b4b5c9-a048-4219-86a9-ef1ff11cc024-catalog-content\") pod \"certified-operators-2vcnd\" (UID: \"93b4b5c9-a048-4219-86a9-ef1ff11cc024\") " pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.238161 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62rp8\" (UniqueName: \"kubernetes.io/projected/93b4b5c9-a048-4219-86a9-ef1ff11cc024-kube-api-access-62rp8\") pod \"certified-operators-2vcnd\" (UID: 
\"93b4b5c9-a048-4219-86a9-ef1ff11cc024\") " pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.303582 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mscg4"] Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.306869 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.310777 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.313184 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mscg4"] Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.317483 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb44914-3bd9-4c8c-937b-cccc55045fc6-utilities\") pod \"community-operators-mscg4\" (UID: \"cfb44914-3bd9-4c8c-937b-cccc55045fc6\") " pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.317548 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb44914-3bd9-4c8c-937b-cccc55045fc6-catalog-content\") pod \"community-operators-mscg4\" (UID: \"cfb44914-3bd9-4c8c-937b-cccc55045fc6\") " pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.317602 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpbww\" (UniqueName: \"kubernetes.io/projected/cfb44914-3bd9-4c8c-937b-cccc55045fc6-kube-api-access-cpbww\") pod \"community-operators-mscg4\" (UID: 
\"cfb44914-3bd9-4c8c-937b-cccc55045fc6\") " pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.418501 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.422310 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb44914-3bd9-4c8c-937b-cccc55045fc6-utilities\") pod \"community-operators-mscg4\" (UID: \"cfb44914-3bd9-4c8c-937b-cccc55045fc6\") " pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.422350 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb44914-3bd9-4c8c-937b-cccc55045fc6-catalog-content\") pod \"community-operators-mscg4\" (UID: \"cfb44914-3bd9-4c8c-937b-cccc55045fc6\") " pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.422385 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpbww\" (UniqueName: \"kubernetes.io/projected/cfb44914-3bd9-4c8c-937b-cccc55045fc6-kube-api-access-cpbww\") pod \"community-operators-mscg4\" (UID: \"cfb44914-3bd9-4c8c-937b-cccc55045fc6\") " pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.423328 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb44914-3bd9-4c8c-937b-cccc55045fc6-utilities\") pod \"community-operators-mscg4\" (UID: \"cfb44914-3bd9-4c8c-937b-cccc55045fc6\") " pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.423526 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb44914-3bd9-4c8c-937b-cccc55045fc6-catalog-content\") pod \"community-operators-mscg4\" (UID: \"cfb44914-3bd9-4c8c-937b-cccc55045fc6\") " pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.453580 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpbww\" (UniqueName: \"kubernetes.io/projected/cfb44914-3bd9-4c8c-937b-cccc55045fc6-kube-api-access-cpbww\") pod \"community-operators-mscg4\" (UID: \"cfb44914-3bd9-4c8c-937b-cccc55045fc6\") " pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.624909 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2vcnd"] Feb 26 08:20:10 crc kubenswrapper[4741]: I0226 08:20:10.625425 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:10 crc kubenswrapper[4741]: W0226 08:20:10.636210 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93b4b5c9_a048_4219_86a9_ef1ff11cc024.slice/crio-43162bac74b3488fd0cca3ac43076253ce31766d7060d97959b5baf38e179382 WatchSource:0}: Error finding container 43162bac74b3488fd0cca3ac43076253ce31766d7060d97959b5baf38e179382: Status 404 returned error can't find the container with id 43162bac74b3488fd0cca3ac43076253ce31766d7060d97959b5baf38e179382 Feb 26 08:20:11 crc kubenswrapper[4741]: I0226 08:20:11.058938 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mscg4"] Feb 26 08:20:11 crc kubenswrapper[4741]: W0226 08:20:11.067281 4741 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfb44914_3bd9_4c8c_937b_cccc55045fc6.slice/crio-ef73eb5c0f20ffde2dc83e77a05803aca4d158987c5409c7f41e5a6d8579c5df WatchSource:0}: Error finding container ef73eb5c0f20ffde2dc83e77a05803aca4d158987c5409c7f41e5a6d8579c5df: Status 404 returned error can't find the container with id ef73eb5c0f20ffde2dc83e77a05803aca4d158987c5409c7f41e5a6d8579c5df Feb 26 08:20:11 crc kubenswrapper[4741]: I0226 08:20:11.436963 4741 generic.go:334] "Generic (PLEG): container finished" podID="cfb44914-3bd9-4c8c-937b-cccc55045fc6" containerID="dc62283adc4c4fcd430ad914343e7ce21187cc3ba445e84d071563a18296957d" exitCode=0 Feb 26 08:20:11 crc kubenswrapper[4741]: I0226 08:20:11.437061 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mscg4" event={"ID":"cfb44914-3bd9-4c8c-937b-cccc55045fc6","Type":"ContainerDied","Data":"dc62283adc4c4fcd430ad914343e7ce21187cc3ba445e84d071563a18296957d"} Feb 26 08:20:11 crc kubenswrapper[4741]: I0226 08:20:11.437157 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mscg4" event={"ID":"cfb44914-3bd9-4c8c-937b-cccc55045fc6","Type":"ContainerStarted","Data":"ef73eb5c0f20ffde2dc83e77a05803aca4d158987c5409c7f41e5a6d8579c5df"} Feb 26 08:20:11 crc kubenswrapper[4741]: I0226 08:20:11.439005 4741 generic.go:334] "Generic (PLEG): container finished" podID="93b4b5c9-a048-4219-86a9-ef1ff11cc024" containerID="e4a12ae1a6b043bca3787fcbb6774eacc042135fc17086a6f32fd609f11d06fa" exitCode=0 Feb 26 08:20:11 crc kubenswrapper[4741]: I0226 08:20:11.439194 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vcnd" event={"ID":"93b4b5c9-a048-4219-86a9-ef1ff11cc024","Type":"ContainerDied","Data":"e4a12ae1a6b043bca3787fcbb6774eacc042135fc17086a6f32fd609f11d06fa"} Feb 26 08:20:11 crc kubenswrapper[4741]: I0226 08:20:11.439255 4741 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-2vcnd" event={"ID":"93b4b5c9-a048-4219-86a9-ef1ff11cc024","Type":"ContainerStarted","Data":"43162bac74b3488fd0cca3ac43076253ce31766d7060d97959b5baf38e179382"} Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.504983 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fzxmn"] Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.506201 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.508870 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.518280 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzxmn"] Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.652299 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7vd9\" (UniqueName: \"kubernetes.io/projected/4ddcb17f-6b4a-4194-aab9-e24dc49c75e0-kube-api-access-r7vd9\") pod \"redhat-marketplace-fzxmn\" (UID: \"4ddcb17f-6b4a-4194-aab9-e24dc49c75e0\") " pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.652517 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ddcb17f-6b4a-4194-aab9-e24dc49c75e0-catalog-content\") pod \"redhat-marketplace-fzxmn\" (UID: \"4ddcb17f-6b4a-4194-aab9-e24dc49c75e0\") " pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.652571 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4ddcb17f-6b4a-4194-aab9-e24dc49c75e0-utilities\") pod \"redhat-marketplace-fzxmn\" (UID: \"4ddcb17f-6b4a-4194-aab9-e24dc49c75e0\") " pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.704429 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-28q5n"] Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.705471 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.709049 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.710609 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28q5n"] Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.753978 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfdrl\" (UniqueName: \"kubernetes.io/projected/c71842fc-fda8-481f-96d6-64b811178a92-kube-api-access-gfdrl\") pod \"redhat-operators-28q5n\" (UID: \"c71842fc-fda8-481f-96d6-64b811178a92\") " pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.754062 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ddcb17f-6b4a-4194-aab9-e24dc49c75e0-utilities\") pod \"redhat-marketplace-fzxmn\" (UID: \"4ddcb17f-6b4a-4194-aab9-e24dc49c75e0\") " pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.754102 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7vd9\" (UniqueName: 
\"kubernetes.io/projected/4ddcb17f-6b4a-4194-aab9-e24dc49c75e0-kube-api-access-r7vd9\") pod \"redhat-marketplace-fzxmn\" (UID: \"4ddcb17f-6b4a-4194-aab9-e24dc49c75e0\") " pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.754155 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c71842fc-fda8-481f-96d6-64b811178a92-catalog-content\") pod \"redhat-operators-28q5n\" (UID: \"c71842fc-fda8-481f-96d6-64b811178a92\") " pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.754185 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c71842fc-fda8-481f-96d6-64b811178a92-utilities\") pod \"redhat-operators-28q5n\" (UID: \"c71842fc-fda8-481f-96d6-64b811178a92\") " pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.754210 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ddcb17f-6b4a-4194-aab9-e24dc49c75e0-catalog-content\") pod \"redhat-marketplace-fzxmn\" (UID: \"4ddcb17f-6b4a-4194-aab9-e24dc49c75e0\") " pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.754750 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ddcb17f-6b4a-4194-aab9-e24dc49c75e0-utilities\") pod \"redhat-marketplace-fzxmn\" (UID: \"4ddcb17f-6b4a-4194-aab9-e24dc49c75e0\") " pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.754853 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4ddcb17f-6b4a-4194-aab9-e24dc49c75e0-catalog-content\") pod \"redhat-marketplace-fzxmn\" (UID: \"4ddcb17f-6b4a-4194-aab9-e24dc49c75e0\") " pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.776567 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7vd9\" (UniqueName: \"kubernetes.io/projected/4ddcb17f-6b4a-4194-aab9-e24dc49c75e0-kube-api-access-r7vd9\") pod \"redhat-marketplace-fzxmn\" (UID: \"4ddcb17f-6b4a-4194-aab9-e24dc49c75e0\") " pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.832192 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.855208 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c71842fc-fda8-481f-96d6-64b811178a92-catalog-content\") pod \"redhat-operators-28q5n\" (UID: \"c71842fc-fda8-481f-96d6-64b811178a92\") " pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.855290 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c71842fc-fda8-481f-96d6-64b811178a92-utilities\") pod \"redhat-operators-28q5n\" (UID: \"c71842fc-fda8-481f-96d6-64b811178a92\") " pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.855328 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfdrl\" (UniqueName: \"kubernetes.io/projected/c71842fc-fda8-481f-96d6-64b811178a92-kube-api-access-gfdrl\") pod \"redhat-operators-28q5n\" (UID: \"c71842fc-fda8-481f-96d6-64b811178a92\") " pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:12 
crc kubenswrapper[4741]: I0226 08:20:12.856544 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c71842fc-fda8-481f-96d6-64b811178a92-utilities\") pod \"redhat-operators-28q5n\" (UID: \"c71842fc-fda8-481f-96d6-64b811178a92\") " pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.857051 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c71842fc-fda8-481f-96d6-64b811178a92-catalog-content\") pod \"redhat-operators-28q5n\" (UID: \"c71842fc-fda8-481f-96d6-64b811178a92\") " pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:12 crc kubenswrapper[4741]: I0226 08:20:12.884377 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfdrl\" (UniqueName: \"kubernetes.io/projected/c71842fc-fda8-481f-96d6-64b811178a92-kube-api-access-gfdrl\") pod \"redhat-operators-28q5n\" (UID: \"c71842fc-fda8-481f-96d6-64b811178a92\") " pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:13 crc kubenswrapper[4741]: I0226 08:20:13.026224 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:13 crc kubenswrapper[4741]: I0226 08:20:13.053776 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzxmn"] Feb 26 08:20:13 crc kubenswrapper[4741]: W0226 08:20:13.064390 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ddcb17f_6b4a_4194_aab9_e24dc49c75e0.slice/crio-5788b457c1f2f27e950ea264ef5bd5afa114c8c1a28c8d87d12610e214868666 WatchSource:0}: Error finding container 5788b457c1f2f27e950ea264ef5bd5afa114c8c1a28c8d87d12610e214868666: Status 404 returned error can't find the container with id 5788b457c1f2f27e950ea264ef5bd5afa114c8c1a28c8d87d12610e214868666 Feb 26 08:20:13 crc kubenswrapper[4741]: I0226 08:20:13.454536 4741 generic.go:334] "Generic (PLEG): container finished" podID="93b4b5c9-a048-4219-86a9-ef1ff11cc024" containerID="e9cfea7665ec00c2f6ba84c0f10b8656dcd6352324a0621bcd7dadb29d6f6b93" exitCode=0 Feb 26 08:20:13 crc kubenswrapper[4741]: I0226 08:20:13.454606 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vcnd" event={"ID":"93b4b5c9-a048-4219-86a9-ef1ff11cc024","Type":"ContainerDied","Data":"e9cfea7665ec00c2f6ba84c0f10b8656dcd6352324a0621bcd7dadb29d6f6b93"} Feb 26 08:20:13 crc kubenswrapper[4741]: I0226 08:20:13.457755 4741 generic.go:334] "Generic (PLEG): container finished" podID="cfb44914-3bd9-4c8c-937b-cccc55045fc6" containerID="f2d757a10d83d55edfa2d5b55e4ac7e580c6c3c73f77de16f86bf3eba356cb21" exitCode=0 Feb 26 08:20:13 crc kubenswrapper[4741]: I0226 08:20:13.457866 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mscg4" event={"ID":"cfb44914-3bd9-4c8c-937b-cccc55045fc6","Type":"ContainerDied","Data":"f2d757a10d83d55edfa2d5b55e4ac7e580c6c3c73f77de16f86bf3eba356cb21"} Feb 26 08:20:13 crc kubenswrapper[4741]: I0226 
08:20:13.464900 4741 generic.go:334] "Generic (PLEG): container finished" podID="4ddcb17f-6b4a-4194-aab9-e24dc49c75e0" containerID="de6adbca77908b66b81c83d1a400f51e27ddcf00a143ad0f1e050dbb3fb6579e" exitCode=0 Feb 26 08:20:13 crc kubenswrapper[4741]: I0226 08:20:13.464971 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzxmn" event={"ID":"4ddcb17f-6b4a-4194-aab9-e24dc49c75e0","Type":"ContainerDied","Data":"de6adbca77908b66b81c83d1a400f51e27ddcf00a143ad0f1e050dbb3fb6579e"} Feb 26 08:20:13 crc kubenswrapper[4741]: I0226 08:20:13.465013 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzxmn" event={"ID":"4ddcb17f-6b4a-4194-aab9-e24dc49c75e0","Type":"ContainerStarted","Data":"5788b457c1f2f27e950ea264ef5bd5afa114c8c1a28c8d87d12610e214868666"} Feb 26 08:20:13 crc kubenswrapper[4741]: I0226 08:20:13.508732 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28q5n"] Feb 26 08:20:13 crc kubenswrapper[4741]: W0226 08:20:13.513676 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc71842fc_fda8_481f_96d6_64b811178a92.slice/crio-d60b7ed118fe0524653e7b26cdfe9937e8ef03efa4a82c8f408c88b2aa459f05 WatchSource:0}: Error finding container d60b7ed118fe0524653e7b26cdfe9937e8ef03efa4a82c8f408c88b2aa459f05: Status 404 returned error can't find the container with id d60b7ed118fe0524653e7b26cdfe9937e8ef03efa4a82c8f408c88b2aa459f05 Feb 26 08:20:14 crc kubenswrapper[4741]: I0226 08:20:14.473767 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mscg4" event={"ID":"cfb44914-3bd9-4c8c-937b-cccc55045fc6","Type":"ContainerStarted","Data":"9c688069aa322a7bc964755fa4fc63f5776991d32d5aebfaa4864ea3ca2107be"} Feb 26 08:20:14 crc kubenswrapper[4741]: I0226 08:20:14.475996 4741 generic.go:334] "Generic (PLEG): container 
finished" podID="c71842fc-fda8-481f-96d6-64b811178a92" containerID="23c9111e6c5f436928a107ab3cdb8fd48f6a667bf35527b108d3643b6f3d2265" exitCode=0 Feb 26 08:20:14 crc kubenswrapper[4741]: I0226 08:20:14.476088 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28q5n" event={"ID":"c71842fc-fda8-481f-96d6-64b811178a92","Type":"ContainerDied","Data":"23c9111e6c5f436928a107ab3cdb8fd48f6a667bf35527b108d3643b6f3d2265"} Feb 26 08:20:14 crc kubenswrapper[4741]: I0226 08:20:14.476130 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28q5n" event={"ID":"c71842fc-fda8-481f-96d6-64b811178a92","Type":"ContainerStarted","Data":"d60b7ed118fe0524653e7b26cdfe9937e8ef03efa4a82c8f408c88b2aa459f05"} Feb 26 08:20:14 crc kubenswrapper[4741]: I0226 08:20:14.479249 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vcnd" event={"ID":"93b4b5c9-a048-4219-86a9-ef1ff11cc024","Type":"ContainerStarted","Data":"a46dfced86b3cacd4035c71a3854c47913fbc86fbe47085c14eabe45ea334138"} Feb 26 08:20:14 crc kubenswrapper[4741]: I0226 08:20:14.492319 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mscg4" podStartSLOduration=2.060426789 podStartE2EDuration="4.492296328s" podCreationTimestamp="2026-02-26 08:20:10 +0000 UTC" firstStartedPulling="2026-02-26 08:20:11.43884522 +0000 UTC m=+446.434782607" lastFinishedPulling="2026-02-26 08:20:13.870714759 +0000 UTC m=+448.866652146" observedRunningTime="2026-02-26 08:20:14.492223766 +0000 UTC m=+449.488161153" watchObservedRunningTime="2026-02-26 08:20:14.492296328 +0000 UTC m=+449.488233715" Feb 26 08:20:14 crc kubenswrapper[4741]: I0226 08:20:14.512563 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2vcnd" podStartSLOduration=2.039352274 podStartE2EDuration="4.512538538s" 
podCreationTimestamp="2026-02-26 08:20:10 +0000 UTC" firstStartedPulling="2026-02-26 08:20:11.440750985 +0000 UTC m=+446.436688402" lastFinishedPulling="2026-02-26 08:20:13.913937279 +0000 UTC m=+448.909874666" observedRunningTime="2026-02-26 08:20:14.508167153 +0000 UTC m=+449.504104540" watchObservedRunningTime="2026-02-26 08:20:14.512538538 +0000 UTC m=+449.508475925" Feb 26 08:20:15 crc kubenswrapper[4741]: I0226 08:20:15.486278 4741 generic.go:334] "Generic (PLEG): container finished" podID="4ddcb17f-6b4a-4194-aab9-e24dc49c75e0" containerID="5ecb0d3caa2e9cb8016382eb40d16eb6b9b74d9a326745ef9d76f1d9f91a17e0" exitCode=0 Feb 26 08:20:15 crc kubenswrapper[4741]: I0226 08:20:15.486339 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzxmn" event={"ID":"4ddcb17f-6b4a-4194-aab9-e24dc49c75e0","Type":"ContainerDied","Data":"5ecb0d3caa2e9cb8016382eb40d16eb6b9b74d9a326745ef9d76f1d9f91a17e0"} Feb 26 08:20:16 crc kubenswrapper[4741]: I0226 08:20:16.495033 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzxmn" event={"ID":"4ddcb17f-6b4a-4194-aab9-e24dc49c75e0","Type":"ContainerStarted","Data":"718db6d1902250137218823d606259dd76c73bff06094368b925b853c63cf36a"} Feb 26 08:20:16 crc kubenswrapper[4741]: I0226 08:20:16.499047 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28q5n" event={"ID":"c71842fc-fda8-481f-96d6-64b811178a92","Type":"ContainerStarted","Data":"4bdc2fb2a3f0d003bb86f5ebdfdff918ca61024cd45c82800bf1309e61bbf41e"} Feb 26 08:20:16 crc kubenswrapper[4741]: I0226 08:20:16.512894 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fzxmn" podStartSLOduration=2.005077692 podStartE2EDuration="4.512861651s" podCreationTimestamp="2026-02-26 08:20:12 +0000 UTC" firstStartedPulling="2026-02-26 08:20:13.467081932 +0000 UTC m=+448.463019319" 
lastFinishedPulling="2026-02-26 08:20:15.974865901 +0000 UTC m=+450.970803278" observedRunningTime="2026-02-26 08:20:16.510914525 +0000 UTC m=+451.506851922" watchObservedRunningTime="2026-02-26 08:20:16.512861651 +0000 UTC m=+451.508799058" Feb 26 08:20:17 crc kubenswrapper[4741]: I0226 08:20:17.508328 4741 generic.go:334] "Generic (PLEG): container finished" podID="c71842fc-fda8-481f-96d6-64b811178a92" containerID="4bdc2fb2a3f0d003bb86f5ebdfdff918ca61024cd45c82800bf1309e61bbf41e" exitCode=0 Feb 26 08:20:17 crc kubenswrapper[4741]: I0226 08:20:17.508391 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28q5n" event={"ID":"c71842fc-fda8-481f-96d6-64b811178a92","Type":"ContainerDied","Data":"4bdc2fb2a3f0d003bb86f5ebdfdff918ca61024cd45c82800bf1309e61bbf41e"} Feb 26 08:20:18 crc kubenswrapper[4741]: I0226 08:20:18.517958 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28q5n" event={"ID":"c71842fc-fda8-481f-96d6-64b811178a92","Type":"ContainerStarted","Data":"cdd07f0f79be867cb248ed01b76e8011f3ccdc76018cde8a4171afd74ac15df2"} Feb 26 08:20:18 crc kubenswrapper[4741]: I0226 08:20:18.540236 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-28q5n" podStartSLOduration=2.961380603 podStartE2EDuration="6.540217119s" podCreationTimestamp="2026-02-26 08:20:12 +0000 UTC" firstStartedPulling="2026-02-26 08:20:14.481420786 +0000 UTC m=+449.477358173" lastFinishedPulling="2026-02-26 08:20:18.060257302 +0000 UTC m=+453.056194689" observedRunningTime="2026-02-26 08:20:18.536542683 +0000 UTC m=+453.532480080" watchObservedRunningTime="2026-02-26 08:20:18.540217119 +0000 UTC m=+453.536154506" Feb 26 08:20:20 crc kubenswrapper[4741]: I0226 08:20:20.419737 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:20 crc kubenswrapper[4741]: I0226 
08:20:20.419825 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:20 crc kubenswrapper[4741]: I0226 08:20:20.486840 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:20 crc kubenswrapper[4741]: I0226 08:20:20.580362 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2vcnd" Feb 26 08:20:20 crc kubenswrapper[4741]: I0226 08:20:20.626765 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:20 crc kubenswrapper[4741]: I0226 08:20:20.627136 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:20 crc kubenswrapper[4741]: I0226 08:20:20.666344 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:21 crc kubenswrapper[4741]: I0226 08:20:21.600331 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mscg4" Feb 26 08:20:22 crc kubenswrapper[4741]: I0226 08:20:22.835731 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:22 crc kubenswrapper[4741]: I0226 08:20:22.836152 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:22 crc kubenswrapper[4741]: I0226 08:20:22.886965 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:23 crc kubenswrapper[4741]: I0226 08:20:23.027983 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:23 crc kubenswrapper[4741]: I0226 08:20:23.028070 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:23 crc kubenswrapper[4741]: I0226 08:20:23.594649 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fzxmn" Feb 26 08:20:24 crc kubenswrapper[4741]: I0226 08:20:24.065229 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-28q5n" podUID="c71842fc-fda8-481f-96d6-64b811178a92" containerName="registry-server" probeResult="failure" output=< Feb 26 08:20:24 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:20:24 crc kubenswrapper[4741]: > Feb 26 08:20:24 crc kubenswrapper[4741]: I0226 08:20:24.515865 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" podUID="ee4578f5-1608-403f-9132-5613a1b3a105" containerName="registry" containerID="cri-o://ea2448748a704a0a54d6bd5f21098d4549edf690e27449cf060f9c2dd3f0f6aa" gracePeriod=30 Feb 26 08:20:25 crc kubenswrapper[4741]: I0226 08:20:25.149232 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:20:25 crc kubenswrapper[4741]: I0226 08:20:25.149777 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:20:25 crc 
kubenswrapper[4741]: I0226 08:20:25.149895 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:20:25 crc kubenswrapper[4741]: I0226 08:20:25.150502 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a57bab21a45c9d03ead0ef63b5fa03a6a68d8b3aa5ce11c056c77e586f94dc22"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 08:20:25 crc kubenswrapper[4741]: I0226 08:20:25.150628 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://a57bab21a45c9d03ead0ef63b5fa03a6a68d8b3aa5ce11c056c77e586f94dc22" gracePeriod=600 Feb 26 08:20:26 crc kubenswrapper[4741]: I0226 08:20:26.565337 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="a57bab21a45c9d03ead0ef63b5fa03a6a68d8b3aa5ce11c056c77e586f94dc22" exitCode=0 Feb 26 08:20:26 crc kubenswrapper[4741]: I0226 08:20:26.565988 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"a57bab21a45c9d03ead0ef63b5fa03a6a68d8b3aa5ce11c056c77e586f94dc22"} Feb 26 08:20:26 crc kubenswrapper[4741]: I0226 08:20:26.566039 4741 scope.go:117] "RemoveContainer" containerID="74fd34539dad1b3e581137821da26ef66ef2001d180d610380669b7356b75b76" Feb 26 08:20:26 crc kubenswrapper[4741]: I0226 08:20:26.570904 4741 generic.go:334] "Generic (PLEG): container finished" podID="ee4578f5-1608-403f-9132-5613a1b3a105" 
containerID="ea2448748a704a0a54d6bd5f21098d4549edf690e27449cf060f9c2dd3f0f6aa" exitCode=0 Feb 26 08:20:26 crc kubenswrapper[4741]: I0226 08:20:26.570931 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" event={"ID":"ee4578f5-1608-403f-9132-5613a1b3a105","Type":"ContainerDied","Data":"ea2448748a704a0a54d6bd5f21098d4549edf690e27449cf060f9c2dd3f0f6aa"} Feb 26 08:20:26 crc kubenswrapper[4741]: I0226 08:20:26.923950 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.083047 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ee4578f5-1608-403f-9132-5613a1b3a105-ca-trust-extracted\") pod \"ee4578f5-1608-403f-9132-5613a1b3a105\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.084574 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ee4578f5-1608-403f-9132-5613a1b3a105\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.084657 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-registry-tls\") pod \"ee4578f5-1608-403f-9132-5613a1b3a105\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.084784 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ee4578f5-1608-403f-9132-5613a1b3a105-registry-certificates\") pod 
\"ee4578f5-1608-403f-9132-5613a1b3a105\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.084811 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-bound-sa-token\") pod \"ee4578f5-1608-403f-9132-5613a1b3a105\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.084828 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ee4578f5-1608-403f-9132-5613a1b3a105-installation-pull-secrets\") pod \"ee4578f5-1608-403f-9132-5613a1b3a105\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.084866 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bh54\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-kube-api-access-6bh54\") pod \"ee4578f5-1608-403f-9132-5613a1b3a105\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.084899 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee4578f5-1608-403f-9132-5613a1b3a105-trusted-ca\") pod \"ee4578f5-1608-403f-9132-5613a1b3a105\" (UID: \"ee4578f5-1608-403f-9132-5613a1b3a105\") " Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.087008 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee4578f5-1608-403f-9132-5613a1b3a105-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ee4578f5-1608-403f-9132-5613a1b3a105" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.091633 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee4578f5-1608-403f-9132-5613a1b3a105-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ee4578f5-1608-403f-9132-5613a1b3a105" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.093904 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ee4578f5-1608-403f-9132-5613a1b3a105" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.098101 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-kube-api-access-6bh54" (OuterVolumeSpecName: "kube-api-access-6bh54") pod "ee4578f5-1608-403f-9132-5613a1b3a105" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105"). InnerVolumeSpecName "kube-api-access-6bh54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.100551 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4578f5-1608-403f-9132-5613a1b3a105-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ee4578f5-1608-403f-9132-5613a1b3a105" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.111697 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ee4578f5-1608-403f-9132-5613a1b3a105" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.115961 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee4578f5-1608-403f-9132-5613a1b3a105-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ee4578f5-1608-403f-9132-5613a1b3a105" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.119690 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ee4578f5-1608-403f-9132-5613a1b3a105" (UID: "ee4578f5-1608-403f-9132-5613a1b3a105"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.187298 4741 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ee4578f5-1608-403f-9132-5613a1b3a105-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.187343 4741 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.187353 4741 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ee4578f5-1608-403f-9132-5613a1b3a105-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.187362 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bh54\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-kube-api-access-6bh54\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.187372 4741 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee4578f5-1608-403f-9132-5613a1b3a105-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.187384 4741 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ee4578f5-1608-403f-9132-5613a1b3a105-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.187393 4741 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ee4578f5-1608-403f-9132-5613a1b3a105-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:20:27 crc 
kubenswrapper[4741]: I0226 08:20:27.579469 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" event={"ID":"ee4578f5-1608-403f-9132-5613a1b3a105","Type":"ContainerDied","Data":"f94cf85e3fd8032392a27f80d855ca410562aa5b133514849454f957cc185ebe"} Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.579538 4741 scope.go:117] "RemoveContainer" containerID="ea2448748a704a0a54d6bd5f21098d4549edf690e27449cf060f9c2dd3f0f6aa" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.579582 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bcnnc" Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.584264 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"d275452109ad7f1a766ddc0a89f39adebdb324aac4761d7d12d72614a2f26d16"} Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.635143 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bcnnc"] Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.640982 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bcnnc"] Feb 26 08:20:27 crc kubenswrapper[4741]: I0226 08:20:27.795293 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4578f5-1608-403f-9132-5613a1b3a105" path="/var/lib/kubelet/pods/ee4578f5-1608-403f-9132-5613a1b3a105/volumes" Feb 26 08:20:33 crc kubenswrapper[4741]: I0226 08:20:33.081571 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:33 crc kubenswrapper[4741]: I0226 08:20:33.143536 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-28q5n" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.050281 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6"] Feb 26 08:20:41 crc kubenswrapper[4741]: E0226 08:20:41.053248 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4578f5-1608-403f-9132-5613a1b3a105" containerName="registry" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.053304 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4578f5-1608-403f-9132-5613a1b3a105" containerName="registry" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.053565 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4578f5-1608-403f-9132-5613a1b3a105" containerName="registry" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.054270 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.061018 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.062867 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.062953 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.062953 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.066411 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6"] Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.067879 
4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.078299 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mf7r\" (UniqueName: \"kubernetes.io/projected/2c271ebd-6915-4d51-b89d-06f446349bde-kube-api-access-5mf7r\") pod \"cluster-monitoring-operator-6d5b84845-bk2l6\" (UID: \"2c271ebd-6915-4d51-b89d-06f446349bde\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.078369 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c271ebd-6915-4d51-b89d-06f446349bde-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-bk2l6\" (UID: \"2c271ebd-6915-4d51-b89d-06f446349bde\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.078414 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/2c271ebd-6915-4d51-b89d-06f446349bde-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-bk2l6\" (UID: \"2c271ebd-6915-4d51-b89d-06f446349bde\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.180097 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mf7r\" (UniqueName: \"kubernetes.io/projected/2c271ebd-6915-4d51-b89d-06f446349bde-kube-api-access-5mf7r\") pod \"cluster-monitoring-operator-6d5b84845-bk2l6\" (UID: \"2c271ebd-6915-4d51-b89d-06f446349bde\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" Feb 26 08:20:41 crc 
kubenswrapper[4741]: I0226 08:20:41.180210 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c271ebd-6915-4d51-b89d-06f446349bde-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-bk2l6\" (UID: \"2c271ebd-6915-4d51-b89d-06f446349bde\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.180258 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/2c271ebd-6915-4d51-b89d-06f446349bde-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-bk2l6\" (UID: \"2c271ebd-6915-4d51-b89d-06f446349bde\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.183850 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/2c271ebd-6915-4d51-b89d-06f446349bde-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-bk2l6\" (UID: \"2c271ebd-6915-4d51-b89d-06f446349bde\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.197746 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c271ebd-6915-4d51-b89d-06f446349bde-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-bk2l6\" (UID: \"2c271ebd-6915-4d51-b89d-06f446349bde\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.204069 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mf7r\" (UniqueName: 
\"kubernetes.io/projected/2c271ebd-6915-4d51-b89d-06f446349bde-kube-api-access-5mf7r\") pod \"cluster-monitoring-operator-6d5b84845-bk2l6\" (UID: \"2c271ebd-6915-4d51-b89d-06f446349bde\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.388747 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" Feb 26 08:20:41 crc kubenswrapper[4741]: I0226 08:20:41.929885 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6"] Feb 26 08:20:42 crc kubenswrapper[4741]: I0226 08:20:42.680711 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" event={"ID":"2c271ebd-6915-4d51-b89d-06f446349bde","Type":"ContainerStarted","Data":"abe100a1b1c55e354bbce301b4b244ac3df548896db19e921a12a167f3271643"} Feb 26 08:20:44 crc kubenswrapper[4741]: I0226 08:20:44.475802 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz"] Feb 26 08:20:44 crc kubenswrapper[4741]: I0226 08:20:44.477810 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz" Feb 26 08:20:44 crc kubenswrapper[4741]: I0226 08:20:44.479807 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-b5cs6" Feb 26 08:20:44 crc kubenswrapper[4741]: I0226 08:20:44.480803 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 26 08:20:44 crc kubenswrapper[4741]: I0226 08:20:44.486455 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz"] Feb 26 08:20:44 crc kubenswrapper[4741]: I0226 08:20:44.532880 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/dca8318b-c85c-42b6-a540-fc16d675a3f4-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-ll5hz\" (UID: \"dca8318b-c85c-42b6-a540-fc16d675a3f4\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz" Feb 26 08:20:44 crc kubenswrapper[4741]: I0226 08:20:44.633785 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/dca8318b-c85c-42b6-a540-fc16d675a3f4-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-ll5hz\" (UID: \"dca8318b-c85c-42b6-a540-fc16d675a3f4\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz" Feb 26 08:20:44 crc kubenswrapper[4741]: I0226 08:20:44.639973 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/dca8318b-c85c-42b6-a540-fc16d675a3f4-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-ll5hz\" (UID: \"dca8318b-c85c-42b6-a540-fc16d675a3f4\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz" Feb 26 08:20:44 crc kubenswrapper[4741]: I0226 08:20:44.696431 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" event={"ID":"2c271ebd-6915-4d51-b89d-06f446349bde","Type":"ContainerStarted","Data":"c413c697e8680effe7a3528430681791690eeff4507f0f5fc82cf65b5e008e5a"} Feb 26 08:20:44 crc kubenswrapper[4741]: I0226 08:20:44.794383 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz" Feb 26 08:20:45 crc kubenswrapper[4741]: I0226 08:20:45.258005 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bk2l6" podStartSLOduration=2.369507392 podStartE2EDuration="4.257980276s" podCreationTimestamp="2026-02-26 08:20:41 +0000 UTC" firstStartedPulling="2026-02-26 08:20:41.944361806 +0000 UTC m=+476.940299233" lastFinishedPulling="2026-02-26 08:20:43.83283472 +0000 UTC m=+478.828772117" observedRunningTime="2026-02-26 08:20:44.718004719 +0000 UTC m=+479.713942126" watchObservedRunningTime="2026-02-26 08:20:45.257980276 +0000 UTC m=+480.253917673" Feb 26 08:20:45 crc kubenswrapper[4741]: I0226 08:20:45.260282 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz"] Feb 26 08:20:45 crc kubenswrapper[4741]: I0226 08:20:45.720534 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz" event={"ID":"dca8318b-c85c-42b6-a540-fc16d675a3f4","Type":"ContainerStarted","Data":"1a46be74a0f8c27b8cf47d92ac3ad2fe3e513be39209ea44959db278940a13d7"} Feb 26 08:20:47 crc kubenswrapper[4741]: I0226 08:20:47.735025 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz" event={"ID":"dca8318b-c85c-42b6-a540-fc16d675a3f4","Type":"ContainerStarted","Data":"b3cb823161dfd6a6b63e980eca6ccfabc353b34b9cb0ca7ffd30e10e8cad49e4"} Feb 26 08:20:47 crc kubenswrapper[4741]: I0226 08:20:47.735657 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz" Feb 26 08:20:47 crc kubenswrapper[4741]: I0226 08:20:47.746779 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz" Feb 26 08:20:47 crc kubenswrapper[4741]: I0226 08:20:47.768544 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz" podStartSLOduration=1.7966056849999998 podStartE2EDuration="3.768508313s" podCreationTimestamp="2026-02-26 08:20:44 +0000 UTC" firstStartedPulling="2026-02-26 08:20:45.269443665 +0000 UTC m=+480.265381062" lastFinishedPulling="2026-02-26 08:20:47.241346263 +0000 UTC m=+482.237283690" observedRunningTime="2026-02-26 08:20:47.761142732 +0000 UTC m=+482.757080159" watchObservedRunningTime="2026-02-26 08:20:47.768508313 +0000 UTC m=+482.764445740" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.589696 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-zghqr"] Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.590893 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.593554 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.594321 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.594921 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.595187 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-9p684" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.605276 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-zghqr"] Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.702958 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/917e3306-53f1-4e6d-af3e-1b71da771a01-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-zghqr\" (UID: \"917e3306-53f1-4e6d-af3e-1b71da771a01\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.703029 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/917e3306-53f1-4e6d-af3e-1b71da771a01-metrics-client-ca\") pod \"prometheus-operator-db54df47d-zghqr\" (UID: \"917e3306-53f1-4e6d-af3e-1b71da771a01\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.703076 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/917e3306-53f1-4e6d-af3e-1b71da771a01-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-zghqr\" (UID: \"917e3306-53f1-4e6d-af3e-1b71da771a01\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.703102 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj5bs\" (UniqueName: \"kubernetes.io/projected/917e3306-53f1-4e6d-af3e-1b71da771a01-kube-api-access-qj5bs\") pod \"prometheus-operator-db54df47d-zghqr\" (UID: \"917e3306-53f1-4e6d-af3e-1b71da771a01\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.805298 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/917e3306-53f1-4e6d-af3e-1b71da771a01-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-zghqr\" (UID: \"917e3306-53f1-4e6d-af3e-1b71da771a01\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.805453 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/917e3306-53f1-4e6d-af3e-1b71da771a01-metrics-client-ca\") pod \"prometheus-operator-db54df47d-zghqr\" (UID: \"917e3306-53f1-4e6d-af3e-1b71da771a01\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:48 crc kubenswrapper[4741]: E0226 08:20:48.805532 4741 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 26 08:20:48 crc kubenswrapper[4741]: E0226 08:20:48.805629 4741 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/917e3306-53f1-4e6d-af3e-1b71da771a01-prometheus-operator-tls podName:917e3306-53f1-4e6d-af3e-1b71da771a01 nodeName:}" failed. No retries permitted until 2026-02-26 08:20:49.305601228 +0000 UTC m=+484.301538655 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/917e3306-53f1-4e6d-af3e-1b71da771a01-prometheus-operator-tls") pod "prometheus-operator-db54df47d-zghqr" (UID: "917e3306-53f1-4e6d-af3e-1b71da771a01") : secret "prometheus-operator-tls" not found Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.805537 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/917e3306-53f1-4e6d-af3e-1b71da771a01-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-zghqr\" (UID: \"917e3306-53f1-4e6d-af3e-1b71da771a01\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.806026 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj5bs\" (UniqueName: \"kubernetes.io/projected/917e3306-53f1-4e6d-af3e-1b71da771a01-kube-api-access-qj5bs\") pod \"prometheus-operator-db54df47d-zghqr\" (UID: \"917e3306-53f1-4e6d-af3e-1b71da771a01\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.807977 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/917e3306-53f1-4e6d-af3e-1b71da771a01-metrics-client-ca\") pod \"prometheus-operator-db54df47d-zghqr\" (UID: \"917e3306-53f1-4e6d-af3e-1b71da771a01\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.816311 4741 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/917e3306-53f1-4e6d-af3e-1b71da771a01-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-zghqr\" (UID: \"917e3306-53f1-4e6d-af3e-1b71da771a01\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:48 crc kubenswrapper[4741]: I0226 08:20:48.832937 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj5bs\" (UniqueName: \"kubernetes.io/projected/917e3306-53f1-4e6d-af3e-1b71da771a01-kube-api-access-qj5bs\") pod \"prometheus-operator-db54df47d-zghqr\" (UID: \"917e3306-53f1-4e6d-af3e-1b71da771a01\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:49 crc kubenswrapper[4741]: I0226 08:20:49.314257 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/917e3306-53f1-4e6d-af3e-1b71da771a01-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-zghqr\" (UID: \"917e3306-53f1-4e6d-af3e-1b71da771a01\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:49 crc kubenswrapper[4741]: I0226 08:20:49.319809 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/917e3306-53f1-4e6d-af3e-1b71da771a01-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-zghqr\" (UID: \"917e3306-53f1-4e6d-af3e-1b71da771a01\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:49 crc kubenswrapper[4741]: I0226 08:20:49.510680 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" Feb 26 08:20:49 crc kubenswrapper[4741]: I0226 08:20:49.745556 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-zghqr"] Feb 26 08:20:49 crc kubenswrapper[4741]: W0226 08:20:49.753260 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod917e3306_53f1_4e6d_af3e_1b71da771a01.slice/crio-5cdd2e902c86650c7279113eda896b6665034cf871d17ec47b560f92a6f33e73 WatchSource:0}: Error finding container 5cdd2e902c86650c7279113eda896b6665034cf871d17ec47b560f92a6f33e73: Status 404 returned error can't find the container with id 5cdd2e902c86650c7279113eda896b6665034cf871d17ec47b560f92a6f33e73 Feb 26 08:20:50 crc kubenswrapper[4741]: I0226 08:20:50.757762 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" event={"ID":"917e3306-53f1-4e6d-af3e-1b71da771a01","Type":"ContainerStarted","Data":"5cdd2e902c86650c7279113eda896b6665034cf871d17ec47b560f92a6f33e73"} Feb 26 08:20:51 crc kubenswrapper[4741]: I0226 08:20:51.767950 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" event={"ID":"917e3306-53f1-4e6d-af3e-1b71da771a01","Type":"ContainerStarted","Data":"ffd3000e8bed879aa32b1faf183c6bb183fe544d4ad586d208f1a1e11d188af7"} Feb 26 08:20:51 crc kubenswrapper[4741]: I0226 08:20:51.768405 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" event={"ID":"917e3306-53f1-4e6d-af3e-1b71da771a01","Type":"ContainerStarted","Data":"d0332e53608f106a0dcce699678913d6df4c313645d7f8aa741b17d2d0b006ac"} Feb 26 08:20:51 crc kubenswrapper[4741]: I0226 08:20:51.787996 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/prometheus-operator-db54df47d-zghqr" podStartSLOduration=2.080662729 podStartE2EDuration="3.787976947s" podCreationTimestamp="2026-02-26 08:20:48 +0000 UTC" firstStartedPulling="2026-02-26 08:20:49.754748401 +0000 UTC m=+484.750685788" lastFinishedPulling="2026-02-26 08:20:51.462062609 +0000 UTC m=+486.458000006" observedRunningTime="2026-02-26 08:20:51.787360849 +0000 UTC m=+486.783298246" watchObservedRunningTime="2026-02-26 08:20:51.787976947 +0000 UTC m=+486.783914334" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.963232 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f"] Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.964962 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.966980 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.967137 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-9vztp" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.967365 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.968415 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d"] Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.969643 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.974720 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.974912 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.975004 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-6kncj" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.975087 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.986146 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.986194 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f80efb3f-e208-4bdc-a15c-bb62d729939d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-w7c9f\" (UID: \"f80efb3f-e208-4bdc-a15c-bb62d729939d\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.986237 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.986307 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.986349 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.986378 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr6qs\" (UniqueName: \"kubernetes.io/projected/f80efb3f-e208-4bdc-a15c-bb62d729939d-kube-api-access-kr6qs\") pod \"openshift-state-metrics-566fddb674-w7c9f\" (UID: \"f80efb3f-e208-4bdc-a15c-bb62d729939d\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.986394 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f80efb3f-e208-4bdc-a15c-bb62d729939d-metrics-client-ca\") pod 
\"openshift-state-metrics-566fddb674-w7c9f\" (UID: \"f80efb3f-e208-4bdc-a15c-bb62d729939d\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.986415 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f80efb3f-e208-4bdc-a15c-bb62d729939d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-w7c9f\" (UID: \"f80efb3f-e208-4bdc-a15c-bb62d729939d\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.986442 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:53 crc kubenswrapper[4741]: I0226 08:20:53.986467 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfrdd\" (UniqueName: \"kubernetes.io/projected/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-api-access-sfrdd\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.004458 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d"] Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.056374 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f"] Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.087254 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfrdd\" (UniqueName: \"kubernetes.io/projected/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-api-access-sfrdd\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.087312 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.087345 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f80efb3f-e208-4bdc-a15c-bb62d729939d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-w7c9f\" (UID: \"f80efb3f-e208-4bdc-a15c-bb62d729939d\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.087380 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: E0226 08:20:54.087471 4741 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Feb 26 08:20:54 crc kubenswrapper[4741]: E0226 
08:20:54.087529 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-state-metrics-tls podName:b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9 nodeName:}" failed. No retries permitted until 2026-02-26 08:20:54.587508949 +0000 UTC m=+489.583446336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-6sc9d" (UID: "b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9") : secret "kube-state-metrics-tls" not found Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.087600 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.087718 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.087784 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr6qs\" (UniqueName: \"kubernetes.io/projected/f80efb3f-e208-4bdc-a15c-bb62d729939d-kube-api-access-kr6qs\") pod \"openshift-state-metrics-566fddb674-w7c9f\" (UID: \"f80efb3f-e208-4bdc-a15c-bb62d729939d\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.087804 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f80efb3f-e208-4bdc-a15c-bb62d729939d-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-w7c9f\" (UID: \"f80efb3f-e208-4bdc-a15c-bb62d729939d\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.087848 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f80efb3f-e208-4bdc-a15c-bb62d729939d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-w7c9f\" (UID: \"f80efb3f-e208-4bdc-a15c-bb62d729939d\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.087894 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: E0226 08:20:54.088137 4741 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Feb 26 08:20:54 crc kubenswrapper[4741]: E0226 08:20:54.088267 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f80efb3f-e208-4bdc-a15c-bb62d729939d-openshift-state-metrics-tls podName:f80efb3f-e208-4bdc-a15c-bb62d729939d nodeName:}" failed. No retries permitted until 2026-02-26 08:20:54.58823546 +0000 UTC m=+489.584172847 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/f80efb3f-e208-4bdc-a15c-bb62d729939d-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-w7c9f" (UID: "f80efb3f-e208-4bdc-a15c-bb62d729939d") : secret "openshift-state-metrics-tls" not found Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.088318 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.088460 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.088726 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.088916 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f80efb3f-e208-4bdc-a15c-bb62d729939d-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-w7c9f\" (UID: \"f80efb3f-e208-4bdc-a15c-bb62d729939d\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.093975 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f80efb3f-e208-4bdc-a15c-bb62d729939d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-w7c9f\" (UID: \"f80efb3f-e208-4bdc-a15c-bb62d729939d\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.113282 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-bzq4x"] Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.114853 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.115746 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.120987 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-49jr7" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.121223 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.121373 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.126928 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr6qs\" (UniqueName: \"kubernetes.io/projected/f80efb3f-e208-4bdc-a15c-bb62d729939d-kube-api-access-kr6qs\") pod \"openshift-state-metrics-566fddb674-w7c9f\" (UID: \"f80efb3f-e208-4bdc-a15c-bb62d729939d\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.134908 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfrdd\" (UniqueName: \"kubernetes.io/projected/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-api-access-sfrdd\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.188912 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c3004207-48f8-4e96-9496-bfaa35c8534f-node-exporter-wtmp\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.188956 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3004207-48f8-4e96-9496-bfaa35c8534f-metrics-client-ca\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.188988 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c3004207-48f8-4e96-9496-bfaa35c8534f-node-exporter-textfile\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.189005 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c3004207-48f8-4e96-9496-bfaa35c8534f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.189134 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c3004207-48f8-4e96-9496-bfaa35c8534f-node-exporter-tls\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.189177 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c3004207-48f8-4e96-9496-bfaa35c8534f-sys\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: 
I0226 08:20:54.189195 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hptd\" (UniqueName: \"kubernetes.io/projected/c3004207-48f8-4e96-9496-bfaa35c8534f-kube-api-access-7hptd\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.189326 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c3004207-48f8-4e96-9496-bfaa35c8534f-root\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.290144 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3004207-48f8-4e96-9496-bfaa35c8534f-metrics-client-ca\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.290192 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c3004207-48f8-4e96-9496-bfaa35c8534f-node-exporter-wtmp\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.290224 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c3004207-48f8-4e96-9496-bfaa35c8534f-node-exporter-textfile\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.290241 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c3004207-48f8-4e96-9496-bfaa35c8534f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.290275 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c3004207-48f8-4e96-9496-bfaa35c8534f-node-exporter-tls\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.290315 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c3004207-48f8-4e96-9496-bfaa35c8534f-sys\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.290335 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hptd\" (UniqueName: \"kubernetes.io/projected/c3004207-48f8-4e96-9496-bfaa35c8534f-kube-api-access-7hptd\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.290492 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c3004207-48f8-4e96-9496-bfaa35c8534f-node-exporter-wtmp\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.290521 4741 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c3004207-48f8-4e96-9496-bfaa35c8534f-sys\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.290937 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c3004207-48f8-4e96-9496-bfaa35c8534f-node-exporter-textfile\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.290986 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c3004207-48f8-4e96-9496-bfaa35c8534f-root\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.290963 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c3004207-48f8-4e96-9496-bfaa35c8534f-root\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.291424 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3004207-48f8-4e96-9496-bfaa35c8534f-metrics-client-ca\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.294166 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/c3004207-48f8-4e96-9496-bfaa35c8534f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.297680 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c3004207-48f8-4e96-9496-bfaa35c8534f-node-exporter-tls\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.306824 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hptd\" (UniqueName: \"kubernetes.io/projected/c3004207-48f8-4e96-9496-bfaa35c8534f-kube-api-access-7hptd\") pod \"node-exporter-bzq4x\" (UID: \"c3004207-48f8-4e96-9496-bfaa35c8534f\") " pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.476910 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-bzq4x" Feb 26 08:20:54 crc kubenswrapper[4741]: W0226 08:20:54.500852 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3004207_48f8_4e96_9496_bfaa35c8534f.slice/crio-f61c577fcaf5aeb38947027a7cdea59b56021d7542f547a395a86050c9093081 WatchSource:0}: Error finding container f61c577fcaf5aeb38947027a7cdea59b56021d7542f547a395a86050c9093081: Status 404 returned error can't find the container with id f61c577fcaf5aeb38947027a7cdea59b56021d7542f547a395a86050c9093081 Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.594722 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f80efb3f-e208-4bdc-a15c-bb62d729939d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-w7c9f\" (UID: \"f80efb3f-e208-4bdc-a15c-bb62d729939d\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.594888 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.601171 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-6sc9d\" (UID: \"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.601404 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f80efb3f-e208-4bdc-a15c-bb62d729939d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-w7c9f\" (UID: \"f80efb3f-e208-4bdc-a15c-bb62d729939d\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.790426 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-bzq4x" event={"ID":"c3004207-48f8-4e96-9496-bfaa35c8534f","Type":"ContainerStarted","Data":"f61c577fcaf5aeb38947027a7cdea59b56021d7542f547a395a86050c9093081"} Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.888189 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" Feb 26 08:20:54 crc kubenswrapper[4741]: I0226 08:20:54.894683 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.006485 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.011372 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.013919 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.014849 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.014983 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.016035 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.017675 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.019065 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.019241 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-5sq7f" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.019360 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.036218 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.054490 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.207814 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.207883 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk2vv\" (UniqueName: \"kubernetes.io/projected/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-kube-api-access-gk2vv\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.207907 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.208014 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.208083 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: 
\"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.208122 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-config-volume\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.208146 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-tls-assets\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.208182 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.208205 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-web-config\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.208244 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-secret-alertmanager-kube-rbac-proxy-web\") 
pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.208300 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.208314 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-config-out\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.229957 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d"] Feb 26 08:20:55 crc kubenswrapper[4741]: W0226 08:20:55.238206 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5bb4d7d_3cee_4eb8_9cf3_f343992a76f9.slice/crio-c571164412c42b647983ed4c0efce29c31f5c8538421ef17a5885b6c81b3ab08 WatchSource:0}: Error finding container c571164412c42b647983ed4c0efce29c31f5c8538421ef17a5885b6c81b3ab08: Status 404 returned error can't find the container with id c571164412c42b647983ed4c0efce29c31f5c8538421ef17a5885b6c81b3ab08 Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.280969 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f"] Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.309646 4741 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.309697 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-config-out\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.309744 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.309787 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk2vv\" (UniqueName: \"kubernetes.io/projected/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-kube-api-access-gk2vv\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.309823 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.309860 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.309898 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.309926 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-config-volume\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.309954 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-tls-assets\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.310003 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.310032 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-web-config\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.310063 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.314422 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.316892 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.317560 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.321950 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.322622 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.324927 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.325474 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-config-out\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.326066 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-config-volume\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.328469 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.329502 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-tls-assets\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.331158 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-web-config\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.344044 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk2vv\" (UniqueName: \"kubernetes.io/projected/88617ada-5cab-4d96-9fc0-e6c8e0c261ff-kube-api-access-gk2vv\") pod \"alertmanager-main-0\" (UID: \"88617ada-5cab-4d96-9fc0-e6c8e0c261ff\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.636500 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.799329 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" event={"ID":"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9","Type":"ContainerStarted","Data":"c571164412c42b647983ed4c0efce29c31f5c8538421ef17a5885b6c81b3ab08"} Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.801181 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" event={"ID":"f80efb3f-e208-4bdc-a15c-bb62d729939d","Type":"ContainerStarted","Data":"c76816436ad04923ff792b6848964496c4c8388aa0e802ac61714419f0855b1d"} Feb 26 08:20:55 crc kubenswrapper[4741]: I0226 08:20:55.801213 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" event={"ID":"f80efb3f-e208-4bdc-a15c-bb62d729939d","Type":"ContainerStarted","Data":"67dd376d2b8f2c2d317d0311c82b012531b2fc311ea7b74e0b7aeaf6be94d9f5"} Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.014039 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf"] Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.016218 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.018570 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.018657 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.018845 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.018851 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.019019 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.019181 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-1ikmh551r9cjh" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.021697 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-zmjxj" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.034435 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf"] Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.130398 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " 
pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.130918 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.130955 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkj4w\" (UniqueName: \"kubernetes.io/projected/8878d1eb-ece5-4e57-aa4b-9997e84f5968-kube-api-access-pkj4w\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.130976 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.130992 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-tls\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.131074 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.131188 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8878d1eb-ece5-4e57-aa4b-9997e84f5968-metrics-client-ca\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.131209 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-grpc-tls\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.178384 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 26 08:20:56 crc kubenswrapper[4741]: W0226 08:20:56.196790 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88617ada_5cab_4d96_9fc0_e6c8e0c261ff.slice/crio-f6f317225a7c9e0e96934824de6cdc4cd0f98498fcf7331b0550ae19eebd058b WatchSource:0}: Error finding container f6f317225a7c9e0e96934824de6cdc4cd0f98498fcf7331b0550ae19eebd058b: Status 404 returned error can't find the container with id f6f317225a7c9e0e96934824de6cdc4cd0f98498fcf7331b0550ae19eebd058b Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 
08:20:56.232469 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.232540 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkj4w\" (UniqueName: \"kubernetes.io/projected/8878d1eb-ece5-4e57-aa4b-9997e84f5968-kube-api-access-pkj4w\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.232565 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.232583 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-tls\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.232610 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: 
\"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.232651 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8878d1eb-ece5-4e57-aa4b-9997e84f5968-metrics-client-ca\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.232670 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-grpc-tls\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.232710 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.234066 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8878d1eb-ece5-4e57-aa4b-9997e84f5968-metrics-client-ca\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.240311 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.240505 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-tls\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.243841 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.244004 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-grpc-tls\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.244802 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: 
\"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.249905 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkj4w\" (UniqueName: \"kubernetes.io/projected/8878d1eb-ece5-4e57-aa4b-9997e84f5968-kube-api-access-pkj4w\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.253376 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8878d1eb-ece5-4e57-aa4b-9997e84f5968-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7bcdd678f4-p8jlf\" (UID: \"8878d1eb-ece5-4e57-aa4b-9997e84f5968\") " pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.334205 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.809780 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf"] Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.815485 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"88617ada-5cab-4d96-9fc0-e6c8e0c261ff","Type":"ContainerStarted","Data":"f6f317225a7c9e0e96934824de6cdc4cd0f98498fcf7331b0550ae19eebd058b"} Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.818075 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" event={"ID":"f80efb3f-e208-4bdc-a15c-bb62d729939d","Type":"ContainerStarted","Data":"fdb451d55248f70ae9def69dcd8d07ce655c666efbcfc9c7f833e3dbfce8ee12"} Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.820058 4741 generic.go:334] "Generic (PLEG): container finished" podID="c3004207-48f8-4e96-9496-bfaa35c8534f" containerID="4c605668355d5f5a7e84f7833dbabfa91d423799169cf8050b86eb44f965a88c" exitCode=0 Feb 26 08:20:56 crc kubenswrapper[4741]: I0226 08:20:56.820129 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-bzq4x" event={"ID":"c3004207-48f8-4e96-9496-bfaa35c8534f","Type":"ContainerDied","Data":"4c605668355d5f5a7e84f7833dbabfa91d423799169cf8050b86eb44f965a88c"} Feb 26 08:20:56 crc kubenswrapper[4741]: W0226 08:20:56.875801 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8878d1eb_ece5_4e57_aa4b_9997e84f5968.slice/crio-e821c612683380395c7560813351b605f019007631f925663e4194f8c5f5a1b8 WatchSource:0}: Error finding container e821c612683380395c7560813351b605f019007631f925663e4194f8c5f5a1b8: Status 404 returned error can't find the container with id 
e821c612683380395c7560813351b605f019007631f925663e4194f8c5f5a1b8 Feb 26 08:20:57 crc kubenswrapper[4741]: I0226 08:20:57.827893 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" event={"ID":"8878d1eb-ece5-4e57-aa4b-9997e84f5968","Type":"ContainerStarted","Data":"e821c612683380395c7560813351b605f019007631f925663e4194f8c5f5a1b8"} Feb 26 08:20:57 crc kubenswrapper[4741]: I0226 08:20:57.829555 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" event={"ID":"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9","Type":"ContainerStarted","Data":"1bd30750b134294876c66cc0520801269bbc91623e3d00e9696e8aff1a6b309b"} Feb 26 08:20:57 crc kubenswrapper[4741]: I0226 08:20:57.832029 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-bzq4x" event={"ID":"c3004207-48f8-4e96-9496-bfaa35c8534f","Type":"ContainerStarted","Data":"febb57407cc4ffc2c4f2c85a291830c00f9108078b8fd723342e97f0958bc074"} Feb 26 08:20:57 crc kubenswrapper[4741]: I0226 08:20:57.832055 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-bzq4x" event={"ID":"c3004207-48f8-4e96-9496-bfaa35c8534f","Type":"ContainerStarted","Data":"dc8156872d1485d5d6d4c12f3801c1959d460bf86f016ad43019bc0d52ff1db4"} Feb 26 08:20:57 crc kubenswrapper[4741]: I0226 08:20:57.861889 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-bzq4x" podStartSLOduration=2.609671648 podStartE2EDuration="3.861859713s" podCreationTimestamp="2026-02-26 08:20:54 +0000 UTC" firstStartedPulling="2026-02-26 08:20:54.504992923 +0000 UTC m=+489.500930310" lastFinishedPulling="2026-02-26 08:20:55.757180978 +0000 UTC m=+490.753118375" observedRunningTime="2026-02-26 08:20:57.856907191 +0000 UTC m=+492.852844588" watchObservedRunningTime="2026-02-26 08:20:57.861859713 +0000 UTC m=+492.857797120" Feb 26 
08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.746476 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7b557c8ffc-ptfnc"] Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.747368 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.768464 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b557c8ffc-ptfnc"] Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.840240 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" event={"ID":"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9","Type":"ContainerStarted","Data":"6430dde3facd73dac2a028a7a1de667ddd72a865a200be3fc2608778594d93b4"} Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.840312 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" event={"ID":"b5bb4d7d-3cee-4eb8-9cf3-f343992a76f9","Type":"ContainerStarted","Data":"1154462f4283c4d7ab0d92617c267891e6fac1569952500238775f37688e6eb1"} Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.844043 4741 generic.go:334] "Generic (PLEG): container finished" podID="88617ada-5cab-4d96-9fc0-e6c8e0c261ff" containerID="e7fc04d633c37839dee805d4f385772fb2f28350fe90eb2b8c9c1b3750131785" exitCode=0 Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.844318 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"88617ada-5cab-4d96-9fc0-e6c8e0c261ff","Type":"ContainerDied","Data":"e7fc04d633c37839dee805d4f385772fb2f28350fe90eb2b8c9c1b3750131785"} Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.849146 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" 
event={"ID":"f80efb3f-e208-4bdc-a15c-bb62d729939d","Type":"ContainerStarted","Data":"87bc22bbd5efeb6c7c36b576eff612ad05e6f760ffae64f21de305580b2b2b57"} Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.860610 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-6sc9d" podStartSLOduration=4.150685627 podStartE2EDuration="5.860588599s" podCreationTimestamp="2026-02-26 08:20:53 +0000 UTC" firstStartedPulling="2026-02-26 08:20:55.243086733 +0000 UTC m=+490.239024120" lastFinishedPulling="2026-02-26 08:20:56.952989705 +0000 UTC m=+491.948927092" observedRunningTime="2026-02-26 08:20:58.858060926 +0000 UTC m=+493.853998323" watchObservedRunningTime="2026-02-26 08:20:58.860588599 +0000 UTC m=+493.856525986" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.879028 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-service-ca\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.879197 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhp9d\" (UniqueName: \"kubernetes.io/projected/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-kube-api-access-lhp9d\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.879319 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-config\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " 
pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.879372 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-oauth-config\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.879418 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-serving-cert\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.879441 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-trusted-ca-bundle\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.879463 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-oauth-serving-cert\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.981437 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-config\") pod 
\"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.981517 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-oauth-config\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.981951 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-serving-cert\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.982002 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-trusted-ca-bundle\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.982023 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-oauth-serving-cert\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.982150 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-service-ca\") pod \"console-7b557c8ffc-ptfnc\" 
(UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.982191 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhp9d\" (UniqueName: \"kubernetes.io/projected/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-kube-api-access-lhp9d\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.982761 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-config\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.983334 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-service-ca\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.983580 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-oauth-serving-cert\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.984068 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-trusted-ca-bundle\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " 
pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:58 crc kubenswrapper[4741]: I0226 08:20:58.996917 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-oauth-config\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.008967 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-serving-cert\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.012180 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhp9d\" (UniqueName: \"kubernetes.io/projected/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-kube-api-access-lhp9d\") pod \"console-7b557c8ffc-ptfnc\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.066820 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.337818 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w7c9f" podStartSLOduration=4.228848969 podStartE2EDuration="6.337786176s" podCreationTimestamp="2026-02-26 08:20:53 +0000 UTC" firstStartedPulling="2026-02-26 08:20:55.918392462 +0000 UTC m=+490.914329849" lastFinishedPulling="2026-02-26 08:20:58.027329669 +0000 UTC m=+493.023267056" observedRunningTime="2026-02-26 08:20:58.929563367 +0000 UTC m=+493.925500774" watchObservedRunningTime="2026-02-26 08:20:59.337786176 +0000 UTC m=+494.333723593" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.340494 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-565b5fc49-5lpkb"] Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.341627 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.345814 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-9k7b9" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.346098 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.346328 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.347414 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.347502 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-1amuebsbrfed9" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.349337 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.354683 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-565b5fc49-5lpkb"] Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.490737 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/e03473fd-4571-48a7-8eb0-93beb64488e7-audit-log\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.490833 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e03473fd-4571-48a7-8eb0-93beb64488e7-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.490859 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/e03473fd-4571-48a7-8eb0-93beb64488e7-metrics-server-audit-profiles\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.490963 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5p52\" (UniqueName: \"kubernetes.io/projected/e03473fd-4571-48a7-8eb0-93beb64488e7-kube-api-access-s5p52\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.491170 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e03473fd-4571-48a7-8eb0-93beb64488e7-secret-metrics-client-certs\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.491405 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e03473fd-4571-48a7-8eb0-93beb64488e7-client-ca-bundle\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " 
pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.491453 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/e03473fd-4571-48a7-8eb0-93beb64488e7-secret-metrics-server-tls\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.592883 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/e03473fd-4571-48a7-8eb0-93beb64488e7-audit-log\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.593264 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5p52\" (UniqueName: \"kubernetes.io/projected/e03473fd-4571-48a7-8eb0-93beb64488e7-kube-api-access-s5p52\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.593294 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e03473fd-4571-48a7-8eb0-93beb64488e7-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.593318 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: 
\"kubernetes.io/configmap/e03473fd-4571-48a7-8eb0-93beb64488e7-metrics-server-audit-profiles\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.593358 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e03473fd-4571-48a7-8eb0-93beb64488e7-secret-metrics-client-certs\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.593413 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/e03473fd-4571-48a7-8eb0-93beb64488e7-secret-metrics-server-tls\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.593425 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/e03473fd-4571-48a7-8eb0-93beb64488e7-audit-log\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.593439 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e03473fd-4571-48a7-8eb0-93beb64488e7-client-ca-bundle\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.596882 4741 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e03473fd-4571-48a7-8eb0-93beb64488e7-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.597535 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/e03473fd-4571-48a7-8eb0-93beb64488e7-metrics-server-audit-profiles\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.599072 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/e03473fd-4571-48a7-8eb0-93beb64488e7-secret-metrics-server-tls\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.599691 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e03473fd-4571-48a7-8eb0-93beb64488e7-secret-metrics-client-certs\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.607922 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e03473fd-4571-48a7-8eb0-93beb64488e7-client-ca-bundle\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc 
kubenswrapper[4741]: I0226 08:20:59.611258 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5p52\" (UniqueName: \"kubernetes.io/projected/e03473fd-4571-48a7-8eb0-93beb64488e7-kube-api-access-s5p52\") pod \"metrics-server-565b5fc49-5lpkb\" (UID: \"e03473fd-4571-48a7-8eb0-93beb64488e7\") " pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.687614 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.740330 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp"] Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.741258 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.743837 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.743989 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.756993 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp"] Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.862723 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" event={"ID":"8878d1eb-ece5-4e57-aa4b-9997e84f5968","Type":"ContainerStarted","Data":"9c8a64cc967e4648f0cca1044c6722d8f12edcdceed57a3f1b22f7e1af9440c9"} Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.897631 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c16d8877-c85a-45f3-b358-cacde9af090f-monitoring-plugin-cert\") pod \"monitoring-plugin-77bff4d7bd-72zxp\" (UID: \"c16d8877-c85a-45f3-b358-cacde9af090f\") " pod="openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp" Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.966234 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-565b5fc49-5lpkb"] Feb 26 08:20:59 crc kubenswrapper[4741]: W0226 08:20:59.979643 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode03473fd_4571_48a7_8eb0_93beb64488e7.slice/crio-8ca8a419b1a87fae59dd0af2735958e75a28b6d4fb21e7a5db5d76e07e439ed7 WatchSource:0}: Error finding container 8ca8a419b1a87fae59dd0af2735958e75a28b6d4fb21e7a5db5d76e07e439ed7: Status 404 returned error can't find the container with id 8ca8a419b1a87fae59dd0af2735958e75a28b6d4fb21e7a5db5d76e07e439ed7 Feb 26 08:20:59 crc kubenswrapper[4741]: I0226 08:20:59.999167 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c16d8877-c85a-45f3-b358-cacde9af090f-monitoring-plugin-cert\") pod \"monitoring-plugin-77bff4d7bd-72zxp\" (UID: \"c16d8877-c85a-45f3-b358-cacde9af090f\") " pod="openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.008300 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c16d8877-c85a-45f3-b358-cacde9af090f-monitoring-plugin-cert\") pod \"monitoring-plugin-77bff4d7bd-72zxp\" (UID: \"c16d8877-c85a-45f3-b358-cacde9af090f\") " pod="openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.017680 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-7b557c8ffc-ptfnc"] Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.067783 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.313632 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp"] Feb 26 08:21:00 crc kubenswrapper[4741]: W0226 08:21:00.329229 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc16d8877_c85a_45f3_b358_cacde9af090f.slice/crio-06b30ada3539d9f3cf46b86892d6f6285990c757a846a573a0db4b1f3acf5586 WatchSource:0}: Error finding container 06b30ada3539d9f3cf46b86892d6f6285990c757a846a573a0db4b1f3acf5586: Status 404 returned error can't find the container with id 06b30ada3539d9f3cf46b86892d6f6285990c757a846a573a0db4b1f3acf5586 Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.400871 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.403266 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.406586 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-xb8vl" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.410206 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.410675 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.410773 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.410851 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.411085 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.411361 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.411096 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.412228 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.413468 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-d7tl6sjnu6bo4" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.418286 4741 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.419444 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.422008 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.422861 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.520172 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/816448f3-dfc3-4045-834a-c82c2a4e0589-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.520295 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.520332 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-web-config\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.520416 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.520507 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.520546 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.520657 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.520709 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc 
kubenswrapper[4741]: I0226 08:21:00.520747 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-config\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.520787 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.520820 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.520845 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.520863 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zplht\" (UniqueName: \"kubernetes.io/projected/816448f3-dfc3-4045-834a-c82c2a4e0589-kube-api-access-zplht\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.521065 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.521165 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.521211 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/816448f3-dfc3-4045-834a-c82c2a4e0589-config-out\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.521265 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.521296 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: 
\"kubernetes.io/empty-dir/816448f3-dfc3-4045-834a-c82c2a4e0589-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.623503 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.627290 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.625700 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.627350 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.627388 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.627441 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-config\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.627505 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.627529 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.627981 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.628058 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zplht\" 
(UniqueName: \"kubernetes.io/projected/816448f3-dfc3-4045-834a-c82c2a4e0589-kube-api-access-zplht\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.628179 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.628218 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.628654 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.628796 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/816448f3-dfc3-4045-834a-c82c2a4e0589-config-out\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.628893 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.628952 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/816448f3-dfc3-4045-834a-c82c2a4e0589-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.628990 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/816448f3-dfc3-4045-834a-c82c2a4e0589-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.629037 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.629062 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-web-config\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.629163 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.629474 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.629625 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/816448f3-dfc3-4045-834a-c82c2a4e0589-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.629691 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.634791 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.646639 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.648602 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/816448f3-dfc3-4045-834a-c82c2a4e0589-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.649221 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zplht\" (UniqueName: \"kubernetes.io/projected/816448f3-dfc3-4045-834a-c82c2a4e0589-kube-api-access-zplht\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.650512 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-config\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.659785 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/816448f3-dfc3-4045-834a-c82c2a4e0589-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.672726 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/816448f3-dfc3-4045-834a-c82c2a4e0589-config-out\") pod \"prometheus-k8s-0\" (UID: 
\"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.673469 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.673528 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-web-config\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.673651 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.673888 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.677681 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.679656 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/816448f3-dfc3-4045-834a-c82c2a4e0589-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"816448f3-dfc3-4045-834a-c82c2a4e0589\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.768761 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.870637 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" event={"ID":"8878d1eb-ece5-4e57-aa4b-9997e84f5968","Type":"ContainerStarted","Data":"a8a3183bde71555face4fb24ef3c6ba9cb20b5978b57920b4c36c672d75dc0db"} Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.870687 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" event={"ID":"8878d1eb-ece5-4e57-aa4b-9997e84f5968","Type":"ContainerStarted","Data":"e932328cae89a1ee81358ca9b40dc67781c905a880c8a06922781a504f9c8cdc"} Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.871390 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" event={"ID":"e03473fd-4571-48a7-8eb0-93beb64488e7","Type":"ContainerStarted","Data":"8ca8a419b1a87fae59dd0af2735958e75a28b6d4fb21e7a5db5d76e07e439ed7"} Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.872061 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp" event={"ID":"c16d8877-c85a-45f3-b358-cacde9af090f","Type":"ContainerStarted","Data":"06b30ada3539d9f3cf46b86892d6f6285990c757a846a573a0db4b1f3acf5586"} Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.887430 
4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b557c8ffc-ptfnc" event={"ID":"088d72b7-98a1-49fb-8ca7-4e88f85dcf30","Type":"ContainerStarted","Data":"e33213b73848fd463be9b2a8377bd333ea96c111cbe212a5d03fa31646ff6fee"} Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.887518 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b557c8ffc-ptfnc" event={"ID":"088d72b7-98a1-49fb-8ca7-4e88f85dcf30","Type":"ContainerStarted","Data":"565cf6504d93d9589034bdf1428cc522f78d758eb6ae7644a9772d1a79a2972a"} Feb 26 08:21:00 crc kubenswrapper[4741]: I0226 08:21:00.914953 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7b557c8ffc-ptfnc" podStartSLOduration=2.91493 podStartE2EDuration="2.91493s" podCreationTimestamp="2026-02-26 08:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:21:00.908502746 +0000 UTC m=+495.904440163" watchObservedRunningTime="2026-02-26 08:21:00.91493 +0000 UTC m=+495.910867387" Feb 26 08:21:01 crc kubenswrapper[4741]: I0226 08:21:01.022687 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 26 08:21:01 crc kubenswrapper[4741]: W0226 08:21:01.681744 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod816448f3_dfc3_4045_834a_c82c2a4e0589.slice/crio-a12bfdc238fc14698cedea5bef5cdf64ad331f690a16c10aea2dedfa9a8a3db9 WatchSource:0}: Error finding container a12bfdc238fc14698cedea5bef5cdf64ad331f690a16c10aea2dedfa9a8a3db9: Status 404 returned error can't find the container with id a12bfdc238fc14698cedea5bef5cdf64ad331f690a16c10aea2dedfa9a8a3db9 Feb 26 08:21:01 crc kubenswrapper[4741]: I0226 08:21:01.895781 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"816448f3-dfc3-4045-834a-c82c2a4e0589","Type":"ContainerStarted","Data":"a12bfdc238fc14698cedea5bef5cdf64ad331f690a16c10aea2dedfa9a8a3db9"} Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.930069 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" event={"ID":"8878d1eb-ece5-4e57-aa4b-9997e84f5968","Type":"ContainerStarted","Data":"35ae95b03a578e5e4cf4f6b9c385d34b8e534f4b390faa2db06c56caf09e0550"} Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.931394 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" event={"ID":"8878d1eb-ece5-4e57-aa4b-9997e84f5968","Type":"ContainerStarted","Data":"30db17ebfe5d3f69ecdfa5b11a8719f8b7e0bab9c3ac0ea7f8d9f3abcf0151f4"} Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.931428 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" event={"ID":"8878d1eb-ece5-4e57-aa4b-9997e84f5968","Type":"ContainerStarted","Data":"7b8db6eef31679c2274de1a65ce18a566b34c729fa7b812f12759a5f02c47133"} Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.931462 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.932689 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" event={"ID":"e03473fd-4571-48a7-8eb0-93beb64488e7","Type":"ContainerStarted","Data":"486f471cf07f8ab1edf57980b123698b4e9fdb935ae0098d56f07723a738fc75"} Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.938425 4741 generic.go:334] "Generic (PLEG): container finished" podID="816448f3-dfc3-4045-834a-c82c2a4e0589" containerID="1d27ce7fb26b92967f13b1245a60feb785e3ad51713754e64d92bf83ff221c98" exitCode=0 Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.938492 4741 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"816448f3-dfc3-4045-834a-c82c2a4e0589","Type":"ContainerDied","Data":"1d27ce7fb26b92967f13b1245a60feb785e3ad51713754e64d92bf83ff221c98"} Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.943978 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"88617ada-5cab-4d96-9fc0-e6c8e0c261ff","Type":"ContainerStarted","Data":"9a79c5a70750a80fbe0f27643c5403d93abddfe1a1184097faab294d3e2bce0a"} Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.944010 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"88617ada-5cab-4d96-9fc0-e6c8e0c261ff","Type":"ContainerStarted","Data":"8e47d681bb52ee84abbf4b15b97df3b7cb513a289d060dd48cf884239ec243be"} Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.944023 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"88617ada-5cab-4d96-9fc0-e6c8e0c261ff","Type":"ContainerStarted","Data":"5a68fe05240cec14d8fad47aa3d19c7df782f457f224444956b7aac44f7cc608"} Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.944038 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"88617ada-5cab-4d96-9fc0-e6c8e0c261ff","Type":"ContainerStarted","Data":"dc9c49bf3a8d958a793b571548025bd022f335e5b089c7eb09e529e7079405de"} Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.948148 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp" event={"ID":"c16d8877-c85a-45f3-b358-cacde9af090f","Type":"ContainerStarted","Data":"e592cf04c0f5038db5a77279b8df1f5899a6f46d8ca815955190262bd7cddfd4"} Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.948919 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp" Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.957450 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp" Feb 26 08:21:03 crc kubenswrapper[4741]: I0226 08:21:03.968340 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" podStartSLOduration=3.028199793 podStartE2EDuration="8.968308626s" podCreationTimestamp="2026-02-26 08:20:55 +0000 UTC" firstStartedPulling="2026-02-26 08:20:56.880880607 +0000 UTC m=+491.876817994" lastFinishedPulling="2026-02-26 08:21:02.8209894 +0000 UTC m=+497.816926827" observedRunningTime="2026-02-26 08:21:03.965757673 +0000 UTC m=+498.961695100" watchObservedRunningTime="2026-02-26 08:21:03.968308626 +0000 UTC m=+498.964246023" Feb 26 08:21:04 crc kubenswrapper[4741]: I0226 08:21:04.041928 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp" podStartSLOduration=2.55124608 podStartE2EDuration="5.041878366s" podCreationTimestamp="2026-02-26 08:20:59 +0000 UTC" firstStartedPulling="2026-02-26 08:21:00.336527261 +0000 UTC m=+495.332464638" lastFinishedPulling="2026-02-26 08:21:02.827159507 +0000 UTC m=+497.823096924" observedRunningTime="2026-02-26 08:21:04.040080184 +0000 UTC m=+499.036017581" watchObservedRunningTime="2026-02-26 08:21:04.041878366 +0000 UTC m=+499.037815793" Feb 26 08:21:04 crc kubenswrapper[4741]: I0226 08:21:04.074916 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" podStartSLOduration=2.227442733 podStartE2EDuration="5.074890223s" podCreationTimestamp="2026-02-26 08:20:59 +0000 UTC" firstStartedPulling="2026-02-26 08:20:59.983692361 +0000 UTC m=+494.979629748" lastFinishedPulling="2026-02-26 08:21:02.831139801 +0000 UTC 
m=+497.827077238" observedRunningTime="2026-02-26 08:21:04.07131677 +0000 UTC m=+499.067254167" watchObservedRunningTime="2026-02-26 08:21:04.074890223 +0000 UTC m=+499.070827610" Feb 26 08:21:04 crc kubenswrapper[4741]: I0226 08:21:04.963378 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"88617ada-5cab-4d96-9fc0-e6c8e0c261ff","Type":"ContainerStarted","Data":"063a2aac1a3abddf5fdc103ab7f9c43e8baf092afb062dd74ec152b466881137"} Feb 26 08:21:04 crc kubenswrapper[4741]: I0226 08:21:04.963444 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"88617ada-5cab-4d96-9fc0-e6c8e0c261ff","Type":"ContainerStarted","Data":"6354984fdac78c305db6c92d9097acfda08fd5b3db32bc7bb8958281910e52ec"} Feb 26 08:21:04 crc kubenswrapper[4741]: I0226 08:21:04.986950 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" Feb 26 08:21:05 crc kubenswrapper[4741]: I0226 08:21:05.010784 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.382245488 podStartE2EDuration="11.010763426s" podCreationTimestamp="2026-02-26 08:20:54 +0000 UTC" firstStartedPulling="2026-02-26 08:20:56.198540526 +0000 UTC m=+491.194477913" lastFinishedPulling="2026-02-26 08:21:02.827058424 +0000 UTC m=+497.822995851" observedRunningTime="2026-02-26 08:21:04.998541435 +0000 UTC m=+499.994478852" watchObservedRunningTime="2026-02-26 08:21:05.010763426 +0000 UTC m=+500.006700813" Feb 26 08:21:07 crc kubenswrapper[4741]: I0226 08:21:07.992546 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"816448f3-dfc3-4045-834a-c82c2a4e0589","Type":"ContainerStarted","Data":"b3907e0597addd2bed1df56638df53a7dee6f0162c0a262893ee3ec45d20cfc8"} Feb 26 08:21:09 crc kubenswrapper[4741]: I0226 
08:21:09.010337 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"816448f3-dfc3-4045-834a-c82c2a4e0589","Type":"ContainerStarted","Data":"d0e0224727a95155846bc52b7815b232f631b989b30cf32da93ffc33381e36f6"} Feb 26 08:21:09 crc kubenswrapper[4741]: I0226 08:21:09.010407 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"816448f3-dfc3-4045-834a-c82c2a4e0589","Type":"ContainerStarted","Data":"154f354ee70fab8e5c6b0065542170d4a386ec98589d454e8bac8301b323113f"} Feb 26 08:21:09 crc kubenswrapper[4741]: I0226 08:21:09.010428 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"816448f3-dfc3-4045-834a-c82c2a4e0589","Type":"ContainerStarted","Data":"b96a53099899302ede5cdce9384557ccd5e148203b63e3f632e2f3c84d5dba2c"} Feb 26 08:21:09 crc kubenswrapper[4741]: I0226 08:21:09.010446 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"816448f3-dfc3-4045-834a-c82c2a4e0589","Type":"ContainerStarted","Data":"113e127ff66a986aae1c6860eb4604fe2d5467ce9473c3a84475a6d71536b8e2"} Feb 26 08:21:09 crc kubenswrapper[4741]: I0226 08:21:09.010462 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"816448f3-dfc3-4045-834a-c82c2a4e0589","Type":"ContainerStarted","Data":"ee9644171b562910a82209d0a14c299604a950d6ca11619cfac0080cb317b702"} Feb 26 08:21:09 crc kubenswrapper[4741]: I0226 08:21:09.069057 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:21:09 crc kubenswrapper[4741]: I0226 08:21:09.069169 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:21:09 crc kubenswrapper[4741]: I0226 08:21:09.087627 4741 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.238591131 podStartE2EDuration="9.087582556s" podCreationTimestamp="2026-02-26 08:21:00 +0000 UTC" firstStartedPulling="2026-02-26 08:21:03.94125354 +0000 UTC m=+498.937190967" lastFinishedPulling="2026-02-26 08:21:07.790245005 +0000 UTC m=+502.786182392" observedRunningTime="2026-02-26 08:21:09.05776392 +0000 UTC m=+504.053701367" watchObservedRunningTime="2026-02-26 08:21:09.087582556 +0000 UTC m=+504.083519983" Feb 26 08:21:09 crc kubenswrapper[4741]: I0226 08:21:09.088321 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:21:10 crc kubenswrapper[4741]: I0226 08:21:10.023211 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:21:10 crc kubenswrapper[4741]: I0226 08:21:10.102213 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hdqgn"] Feb 26 08:21:10 crc kubenswrapper[4741]: I0226 08:21:10.769982 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:21:19 crc kubenswrapper[4741]: I0226 08:21:19.688246 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:21:19 crc kubenswrapper[4741]: I0226 08:21:19.688924 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.164997 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-hdqgn" podUID="a1087876-b61e-42ed-bd63-0ede0e6a09e3" containerName="console" containerID="cri-o://1f611af26e400f5cb9eb3967d775dc7d79dc72e69b9fa73b7b0e462ac1f6a7be" gracePeriod=15 Feb 26 08:21:35 crc 
kubenswrapper[4741]: I0226 08:21:35.673755 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hdqgn_a1087876-b61e-42ed-bd63-0ede0e6a09e3/console/0.log" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.674126 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.841352 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlxgn\" (UniqueName: \"kubernetes.io/projected/a1087876-b61e-42ed-bd63-0ede0e6a09e3-kube-api-access-zlxgn\") pod \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.842059 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-trusted-ca-bundle\") pod \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.842186 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-serving-cert\") pod \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.842394 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-oauth-serving-cert\") pod \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.842447 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-service-ca\") pod \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.842502 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-config\") pod \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.842554 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-oauth-config\") pod \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\" (UID: \"a1087876-b61e-42ed-bd63-0ede0e6a09e3\") " Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.843428 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a1087876-b61e-42ed-bd63-0ede0e6a09e3" (UID: "a1087876-b61e-42ed-bd63-0ede0e6a09e3"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.843510 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-config" (OuterVolumeSpecName: "console-config") pod "a1087876-b61e-42ed-bd63-0ede0e6a09e3" (UID: "a1087876-b61e-42ed-bd63-0ede0e6a09e3"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.843566 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a1087876-b61e-42ed-bd63-0ede0e6a09e3" (UID: "a1087876-b61e-42ed-bd63-0ede0e6a09e3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.843594 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "a1087876-b61e-42ed-bd63-0ede0e6a09e3" (UID: "a1087876-b61e-42ed-bd63-0ede0e6a09e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.844194 4741 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.844224 4741 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.844241 4741 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.844258 4741 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-config\") on node \"crc\" 
DevicePath \"\"" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.851169 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a1087876-b61e-42ed-bd63-0ede0e6a09e3" (UID: "a1087876-b61e-42ed-bd63-0ede0e6a09e3"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.851533 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1087876-b61e-42ed-bd63-0ede0e6a09e3-kube-api-access-zlxgn" (OuterVolumeSpecName: "kube-api-access-zlxgn") pod "a1087876-b61e-42ed-bd63-0ede0e6a09e3" (UID: "a1087876-b61e-42ed-bd63-0ede0e6a09e3"). InnerVolumeSpecName "kube-api-access-zlxgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.851914 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a1087876-b61e-42ed-bd63-0ede0e6a09e3" (UID: "a1087876-b61e-42ed-bd63-0ede0e6a09e3"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.945677 4741 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.945717 4741 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a1087876-b61e-42ed-bd63-0ede0e6a09e3-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:21:35 crc kubenswrapper[4741]: I0226 08:21:35.945734 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlxgn\" (UniqueName: \"kubernetes.io/projected/a1087876-b61e-42ed-bd63-0ede0e6a09e3-kube-api-access-zlxgn\") on node \"crc\" DevicePath \"\"" Feb 26 08:21:36 crc kubenswrapper[4741]: I0226 08:21:36.252009 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hdqgn_a1087876-b61e-42ed-bd63-0ede0e6a09e3/console/0.log" Feb 26 08:21:36 crc kubenswrapper[4741]: I0226 08:21:36.252088 4741 generic.go:334] "Generic (PLEG): container finished" podID="a1087876-b61e-42ed-bd63-0ede0e6a09e3" containerID="1f611af26e400f5cb9eb3967d775dc7d79dc72e69b9fa73b7b0e462ac1f6a7be" exitCode=2 Feb 26 08:21:36 crc kubenswrapper[4741]: I0226 08:21:36.252169 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hdqgn" event={"ID":"a1087876-b61e-42ed-bd63-0ede0e6a09e3","Type":"ContainerDied","Data":"1f611af26e400f5cb9eb3967d775dc7d79dc72e69b9fa73b7b0e462ac1f6a7be"} Feb 26 08:21:36 crc kubenswrapper[4741]: I0226 08:21:36.252220 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hdqgn" event={"ID":"a1087876-b61e-42ed-bd63-0ede0e6a09e3","Type":"ContainerDied","Data":"434cf3f8ea59954ee8ad3512f97c42935f5ea4c8319c0e4e57eee6b7231fe98d"} Feb 
26 08:21:36 crc kubenswrapper[4741]: I0226 08:21:36.252248 4741 scope.go:117] "RemoveContainer" containerID="1f611af26e400f5cb9eb3967d775dc7d79dc72e69b9fa73b7b0e462ac1f6a7be" Feb 26 08:21:36 crc kubenswrapper[4741]: I0226 08:21:36.252278 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hdqgn" Feb 26 08:21:36 crc kubenswrapper[4741]: I0226 08:21:36.284117 4741 scope.go:117] "RemoveContainer" containerID="1f611af26e400f5cb9eb3967d775dc7d79dc72e69b9fa73b7b0e462ac1f6a7be" Feb 26 08:21:36 crc kubenswrapper[4741]: E0226 08:21:36.284644 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f611af26e400f5cb9eb3967d775dc7d79dc72e69b9fa73b7b0e462ac1f6a7be\": container with ID starting with 1f611af26e400f5cb9eb3967d775dc7d79dc72e69b9fa73b7b0e462ac1f6a7be not found: ID does not exist" containerID="1f611af26e400f5cb9eb3967d775dc7d79dc72e69b9fa73b7b0e462ac1f6a7be" Feb 26 08:21:36 crc kubenswrapper[4741]: I0226 08:21:36.284679 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f611af26e400f5cb9eb3967d775dc7d79dc72e69b9fa73b7b0e462ac1f6a7be"} err="failed to get container status \"1f611af26e400f5cb9eb3967d775dc7d79dc72e69b9fa73b7b0e462ac1f6a7be\": rpc error: code = NotFound desc = could not find container \"1f611af26e400f5cb9eb3967d775dc7d79dc72e69b9fa73b7b0e462ac1f6a7be\": container with ID starting with 1f611af26e400f5cb9eb3967d775dc7d79dc72e69b9fa73b7b0e462ac1f6a7be not found: ID does not exist" Feb 26 08:21:36 crc kubenswrapper[4741]: I0226 08:21:36.299051 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hdqgn"] Feb 26 08:21:36 crc kubenswrapper[4741]: I0226 08:21:36.304007 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-hdqgn"] Feb 26 08:21:37 crc kubenswrapper[4741]: I0226 
08:21:37.804163 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1087876-b61e-42ed-bd63-0ede0e6a09e3" path="/var/lib/kubelet/pods/a1087876-b61e-42ed-bd63-0ede0e6a09e3/volumes" Feb 26 08:21:39 crc kubenswrapper[4741]: I0226 08:21:39.698011 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:21:39 crc kubenswrapper[4741]: I0226 08:21:39.704226 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.135677 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534902-k8qlb"] Feb 26 08:22:00 crc kubenswrapper[4741]: E0226 08:22:00.136716 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1087876-b61e-42ed-bd63-0ede0e6a09e3" containerName="console" Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.136731 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1087876-b61e-42ed-bd63-0ede0e6a09e3" containerName="console" Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.136855 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1087876-b61e-42ed-bd63-0ede0e6a09e3" containerName="console" Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.137456 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534902-k8qlb" Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.144754 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.144976 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.145142 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.159561 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534902-k8qlb"] Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.306895 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llbjs\" (UniqueName: \"kubernetes.io/projected/1fc0b141-0dab-4fa4-923b-af343e6ecb35-kube-api-access-llbjs\") pod \"auto-csr-approver-29534902-k8qlb\" (UID: \"1fc0b141-0dab-4fa4-923b-af343e6ecb35\") " pod="openshift-infra/auto-csr-approver-29534902-k8qlb" Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.408677 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llbjs\" (UniqueName: \"kubernetes.io/projected/1fc0b141-0dab-4fa4-923b-af343e6ecb35-kube-api-access-llbjs\") pod \"auto-csr-approver-29534902-k8qlb\" (UID: \"1fc0b141-0dab-4fa4-923b-af343e6ecb35\") " pod="openshift-infra/auto-csr-approver-29534902-k8qlb" Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.438799 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llbjs\" (UniqueName: \"kubernetes.io/projected/1fc0b141-0dab-4fa4-923b-af343e6ecb35-kube-api-access-llbjs\") pod \"auto-csr-approver-29534902-k8qlb\" (UID: \"1fc0b141-0dab-4fa4-923b-af343e6ecb35\") " 
pod="openshift-infra/auto-csr-approver-29534902-k8qlb" Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.469948 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534902-k8qlb" Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.759690 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534902-k8qlb"] Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.767589 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.769468 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:22:00 crc kubenswrapper[4741]: I0226 08:22:00.806221 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:22:01 crc kubenswrapper[4741]: I0226 08:22:01.481843 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534902-k8qlb" event={"ID":"1fc0b141-0dab-4fa4-923b-af343e6ecb35","Type":"ContainerStarted","Data":"464691d09d834f404cda3e5626818e1362b9f2b9fef5d634a3211ed820c91eaa"} Feb 26 08:22:01 crc kubenswrapper[4741]: I0226 08:22:01.533738 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 08:22:03 crc kubenswrapper[4741]: I0226 08:22:03.503389 4741 generic.go:334] "Generic (PLEG): container finished" podID="1fc0b141-0dab-4fa4-923b-af343e6ecb35" containerID="cdc25ff52f1c1423806a637bd99f80e8abefaa09fbd4f8f97663ef4b1a67d829" exitCode=0 Feb 26 08:22:03 crc kubenswrapper[4741]: I0226 08:22:03.503482 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534902-k8qlb" 
event={"ID":"1fc0b141-0dab-4fa4-923b-af343e6ecb35","Type":"ContainerDied","Data":"cdc25ff52f1c1423806a637bd99f80e8abefaa09fbd4f8f97663ef4b1a67d829"} Feb 26 08:22:04 crc kubenswrapper[4741]: I0226 08:22:04.834601 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534902-k8qlb" Feb 26 08:22:04 crc kubenswrapper[4741]: I0226 08:22:04.902212 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llbjs\" (UniqueName: \"kubernetes.io/projected/1fc0b141-0dab-4fa4-923b-af343e6ecb35-kube-api-access-llbjs\") pod \"1fc0b141-0dab-4fa4-923b-af343e6ecb35\" (UID: \"1fc0b141-0dab-4fa4-923b-af343e6ecb35\") " Feb 26 08:22:04 crc kubenswrapper[4741]: I0226 08:22:04.910845 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fc0b141-0dab-4fa4-923b-af343e6ecb35-kube-api-access-llbjs" (OuterVolumeSpecName: "kube-api-access-llbjs") pod "1fc0b141-0dab-4fa4-923b-af343e6ecb35" (UID: "1fc0b141-0dab-4fa4-923b-af343e6ecb35"). InnerVolumeSpecName "kube-api-access-llbjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:22:05 crc kubenswrapper[4741]: I0226 08:22:05.004295 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llbjs\" (UniqueName: \"kubernetes.io/projected/1fc0b141-0dab-4fa4-923b-af343e6ecb35-kube-api-access-llbjs\") on node \"crc\" DevicePath \"\"" Feb 26 08:22:05 crc kubenswrapper[4741]: I0226 08:22:05.527466 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534902-k8qlb" event={"ID":"1fc0b141-0dab-4fa4-923b-af343e6ecb35","Type":"ContainerDied","Data":"464691d09d834f404cda3e5626818e1362b9f2b9fef5d634a3211ed820c91eaa"} Feb 26 08:22:05 crc kubenswrapper[4741]: I0226 08:22:05.527550 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="464691d09d834f404cda3e5626818e1362b9f2b9fef5d634a3211ed820c91eaa" Feb 26 08:22:05 crc kubenswrapper[4741]: I0226 08:22:05.527612 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534902-k8qlb" Feb 26 08:22:05 crc kubenswrapper[4741]: I0226 08:22:05.919557 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534896-rcrbz"] Feb 26 08:22:05 crc kubenswrapper[4741]: I0226 08:22:05.925730 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534896-rcrbz"] Feb 26 08:22:07 crc kubenswrapper[4741]: I0226 08:22:07.799280 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="565843a6-5907-4445-9686-cb92b1a56bec" path="/var/lib/kubelet/pods/565843a6-5907-4445-9686-cb92b1a56bec/volumes" Feb 26 08:22:17 crc kubenswrapper[4741]: I0226 08:22:17.807852 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b59954f46-5k6gz"] Feb 26 08:22:17 crc kubenswrapper[4741]: E0226 08:22:17.810647 4741 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1fc0b141-0dab-4fa4-923b-af343e6ecb35" containerName="oc" Feb 26 08:22:17 crc kubenswrapper[4741]: I0226 08:22:17.811484 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc0b141-0dab-4fa4-923b-af343e6ecb35" containerName="oc" Feb 26 08:22:17 crc kubenswrapper[4741]: I0226 08:22:17.811740 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fc0b141-0dab-4fa4-923b-af343e6ecb35" containerName="oc" Feb 26 08:22:17 crc kubenswrapper[4741]: I0226 08:22:17.813333 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:17 crc kubenswrapper[4741]: I0226 08:22:17.828175 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b59954f46-5k6gz"] Feb 26 08:22:17 crc kubenswrapper[4741]: I0226 08:22:17.945455 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-config\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:17 crc kubenswrapper[4741]: I0226 08:22:17.945507 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khvtc\" (UniqueName: \"kubernetes.io/projected/58b8cb21-1eba-4ae6-84d5-64c306112b53-kube-api-access-khvtc\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:17 crc kubenswrapper[4741]: I0226 08:22:17.945592 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-oauth-serving-cert\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " 
pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:17 crc kubenswrapper[4741]: I0226 08:22:17.945725 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-serving-cert\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:17 crc kubenswrapper[4741]: I0226 08:22:17.945772 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-service-ca\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:17 crc kubenswrapper[4741]: I0226 08:22:17.945970 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-oauth-config\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:17 crc kubenswrapper[4741]: I0226 08:22:17.946054 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-trusted-ca-bundle\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.047549 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-serving-cert\") pod \"console-6b59954f46-5k6gz\" 
(UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.047638 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-service-ca\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.047713 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-oauth-config\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.047752 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-trusted-ca-bundle\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.047859 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-config\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.047893 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khvtc\" (UniqueName: \"kubernetes.io/projected/58b8cb21-1eba-4ae6-84d5-64c306112b53-kube-api-access-khvtc\") pod \"console-6b59954f46-5k6gz\" (UID: 
\"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.047939 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-oauth-serving-cert\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.049780 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-service-ca\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.049920 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-oauth-serving-cert\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.050067 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-trusted-ca-bundle\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.050552 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-config\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " 
pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.056411 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-oauth-config\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.058437 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-serving-cert\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.083991 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khvtc\" (UniqueName: \"kubernetes.io/projected/58b8cb21-1eba-4ae6-84d5-64c306112b53-kube-api-access-khvtc\") pod \"console-6b59954f46-5k6gz\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.145272 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.492013 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b59954f46-5k6gz"] Feb 26 08:22:18 crc kubenswrapper[4741]: I0226 08:22:18.663525 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b59954f46-5k6gz" event={"ID":"58b8cb21-1eba-4ae6-84d5-64c306112b53","Type":"ContainerStarted","Data":"f867639c9e87f942f7e1701e448f13a65d09401363826b9fa588a75145974ea4"} Feb 26 08:22:19 crc kubenswrapper[4741]: I0226 08:22:19.676472 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b59954f46-5k6gz" event={"ID":"58b8cb21-1eba-4ae6-84d5-64c306112b53","Type":"ContainerStarted","Data":"9f132c4e068e18a809d74fba42aaa0236d37e9e14e406f18c755c180a2635f49"} Feb 26 08:22:19 crc kubenswrapper[4741]: I0226 08:22:19.710978 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b59954f46-5k6gz" podStartSLOduration=2.710950902 podStartE2EDuration="2.710950902s" podCreationTimestamp="2026-02-26 08:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:22:19.705575767 +0000 UTC m=+574.701513174" watchObservedRunningTime="2026-02-26 08:22:19.710950902 +0000 UTC m=+574.706888289" Feb 26 08:22:28 crc kubenswrapper[4741]: I0226 08:22:28.146279 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:28 crc kubenswrapper[4741]: I0226 08:22:28.146943 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:28 crc kubenswrapper[4741]: I0226 08:22:28.154714 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:28 crc kubenswrapper[4741]: I0226 08:22:28.784210 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:22:28 crc kubenswrapper[4741]: I0226 08:22:28.870716 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b557c8ffc-ptfnc"] Feb 26 08:22:53 crc kubenswrapper[4741]: I0226 08:22:53.928024 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7b557c8ffc-ptfnc" podUID="088d72b7-98a1-49fb-8ca7-4e88f85dcf30" containerName="console" containerID="cri-o://e33213b73848fd463be9b2a8377bd333ea96c111cbe212a5d03fa31646ff6fee" gracePeriod=15 Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.001635 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b557c8ffc-ptfnc_088d72b7-98a1-49fb-8ca7-4e88f85dcf30/console/0.log" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.002409 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.049576 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b557c8ffc-ptfnc_088d72b7-98a1-49fb-8ca7-4e88f85dcf30/console/0.log" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.049664 4741 generic.go:334] "Generic (PLEG): container finished" podID="088d72b7-98a1-49fb-8ca7-4e88f85dcf30" containerID="e33213b73848fd463be9b2a8377bd333ea96c111cbe212a5d03fa31646ff6fee" exitCode=2 Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.049704 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b557c8ffc-ptfnc" event={"ID":"088d72b7-98a1-49fb-8ca7-4e88f85dcf30","Type":"ContainerDied","Data":"e33213b73848fd463be9b2a8377bd333ea96c111cbe212a5d03fa31646ff6fee"} Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.049739 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b557c8ffc-ptfnc" event={"ID":"088d72b7-98a1-49fb-8ca7-4e88f85dcf30","Type":"ContainerDied","Data":"565cf6504d93d9589034bdf1428cc522f78d758eb6ae7644a9772d1a79a2972a"} Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.049764 4741 scope.go:117] "RemoveContainer" containerID="e33213b73848fd463be9b2a8377bd333ea96c111cbe212a5d03fa31646ff6fee" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.049971 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b557c8ffc-ptfnc" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.055640 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-serving-cert\") pod \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.055717 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-trusted-ca-bundle\") pod \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.055827 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-oauth-config\") pod \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.055890 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhp9d\" (UniqueName: \"kubernetes.io/projected/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-kube-api-access-lhp9d\") pod \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.055958 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-config\") pod \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.056146 4741 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-service-ca\") pod \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.056216 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-oauth-serving-cert\") pod \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\" (UID: \"088d72b7-98a1-49fb-8ca7-4e88f85dcf30\") " Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.057547 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "088d72b7-98a1-49fb-8ca7-4e88f85dcf30" (UID: "088d72b7-98a1-49fb-8ca7-4e88f85dcf30"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.057515 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-config" (OuterVolumeSpecName: "console-config") pod "088d72b7-98a1-49fb-8ca7-4e88f85dcf30" (UID: "088d72b7-98a1-49fb-8ca7-4e88f85dcf30"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.058275 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-service-ca" (OuterVolumeSpecName: "service-ca") pod "088d72b7-98a1-49fb-8ca7-4e88f85dcf30" (UID: "088d72b7-98a1-49fb-8ca7-4e88f85dcf30"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.058601 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "088d72b7-98a1-49fb-8ca7-4e88f85dcf30" (UID: "088d72b7-98a1-49fb-8ca7-4e88f85dcf30"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.067842 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-kube-api-access-lhp9d" (OuterVolumeSpecName: "kube-api-access-lhp9d") pod "088d72b7-98a1-49fb-8ca7-4e88f85dcf30" (UID: "088d72b7-98a1-49fb-8ca7-4e88f85dcf30"). InnerVolumeSpecName "kube-api-access-lhp9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.068508 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "088d72b7-98a1-49fb-8ca7-4e88f85dcf30" (UID: "088d72b7-98a1-49fb-8ca7-4e88f85dcf30"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.069364 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "088d72b7-98a1-49fb-8ca7-4e88f85dcf30" (UID: "088d72b7-98a1-49fb-8ca7-4e88f85dcf30"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.079466 4741 scope.go:117] "RemoveContainer" containerID="e33213b73848fd463be9b2a8377bd333ea96c111cbe212a5d03fa31646ff6fee" Feb 26 08:22:55 crc kubenswrapper[4741]: E0226 08:22:55.082551 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e33213b73848fd463be9b2a8377bd333ea96c111cbe212a5d03fa31646ff6fee\": container with ID starting with e33213b73848fd463be9b2a8377bd333ea96c111cbe212a5d03fa31646ff6fee not found: ID does not exist" containerID="e33213b73848fd463be9b2a8377bd333ea96c111cbe212a5d03fa31646ff6fee" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.082602 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33213b73848fd463be9b2a8377bd333ea96c111cbe212a5d03fa31646ff6fee"} err="failed to get container status \"e33213b73848fd463be9b2a8377bd333ea96c111cbe212a5d03fa31646ff6fee\": rpc error: code = NotFound desc = could not find container \"e33213b73848fd463be9b2a8377bd333ea96c111cbe212a5d03fa31646ff6fee\": container with ID starting with e33213b73848fd463be9b2a8377bd333ea96c111cbe212a5d03fa31646ff6fee not found: ID does not exist" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.149840 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.149949 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.158687 4741 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.158781 4741 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.158808 4741 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.158831 4741 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.158850 4741 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.158868 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhp9d\" (UniqueName: \"kubernetes.io/projected/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-kube-api-access-lhp9d\") on node \"crc\" DevicePath \"\"" Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.158887 4741 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/088d72b7-98a1-49fb-8ca7-4e88f85dcf30-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:22:55 crc kubenswrapper[4741]: 
I0226 08:22:55.403049 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b557c8ffc-ptfnc"] Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.411385 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7b557c8ffc-ptfnc"] Feb 26 08:22:55 crc kubenswrapper[4741]: I0226 08:22:55.806035 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="088d72b7-98a1-49fb-8ca7-4e88f85dcf30" path="/var/lib/kubelet/pods/088d72b7-98a1-49fb-8ca7-4e88f85dcf30/volumes" Feb 26 08:23:25 crc kubenswrapper[4741]: I0226 08:23:25.149701 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:23:25 crc kubenswrapper[4741]: I0226 08:23:25.150706 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:23:55 crc kubenswrapper[4741]: I0226 08:23:55.149208 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:23:55 crc kubenswrapper[4741]: I0226 08:23:55.150096 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 26 08:23:55 crc kubenswrapper[4741]: I0226 08:23:55.150208 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:23:55 crc kubenswrapper[4741]: I0226 08:23:55.151087 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d275452109ad7f1a766ddc0a89f39adebdb324aac4761d7d12d72614a2f26d16"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 08:23:55 crc kubenswrapper[4741]: I0226 08:23:55.151228 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://d275452109ad7f1a766ddc0a89f39adebdb324aac4761d7d12d72614a2f26d16" gracePeriod=600 Feb 26 08:23:56 crc kubenswrapper[4741]: I0226 08:23:56.210544 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="d275452109ad7f1a766ddc0a89f39adebdb324aac4761d7d12d72614a2f26d16" exitCode=0 Feb 26 08:23:56 crc kubenswrapper[4741]: I0226 08:23:56.210650 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"d275452109ad7f1a766ddc0a89f39adebdb324aac4761d7d12d72614a2f26d16"} Feb 26 08:23:56 crc kubenswrapper[4741]: I0226 08:23:56.211752 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" 
event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"5317dc5f3c75a59e412204491b2519a9de9fcc656951e4938ef7f60d11fdcaab"} Feb 26 08:23:56 crc kubenswrapper[4741]: I0226 08:23:56.211788 4741 scope.go:117] "RemoveContainer" containerID="a57bab21a45c9d03ead0ef63b5fa03a6a68d8b3aa5ce11c056c77e586f94dc22" Feb 26 08:24:00 crc kubenswrapper[4741]: I0226 08:24:00.163191 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534904-2jxvw"] Feb 26 08:24:00 crc kubenswrapper[4741]: E0226 08:24:00.164037 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="088d72b7-98a1-49fb-8ca7-4e88f85dcf30" containerName="console" Feb 26 08:24:00 crc kubenswrapper[4741]: I0226 08:24:00.164062 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="088d72b7-98a1-49fb-8ca7-4e88f85dcf30" containerName="console" Feb 26 08:24:00 crc kubenswrapper[4741]: I0226 08:24:00.164335 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="088d72b7-98a1-49fb-8ca7-4e88f85dcf30" containerName="console" Feb 26 08:24:00 crc kubenswrapper[4741]: I0226 08:24:00.165162 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534904-2jxvw" Feb 26 08:24:00 crc kubenswrapper[4741]: I0226 08:24:00.168377 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:24:00 crc kubenswrapper[4741]: I0226 08:24:00.168617 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:24:00 crc kubenswrapper[4741]: I0226 08:24:00.168401 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:24:00 crc kubenswrapper[4741]: I0226 08:24:00.172802 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534904-2jxvw"] Feb 26 08:24:00 crc kubenswrapper[4741]: I0226 08:24:00.242417 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fn8h\" (UniqueName: \"kubernetes.io/projected/3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c-kube-api-access-7fn8h\") pod \"auto-csr-approver-29534904-2jxvw\" (UID: \"3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c\") " pod="openshift-infra/auto-csr-approver-29534904-2jxvw" Feb 26 08:24:00 crc kubenswrapper[4741]: I0226 08:24:00.344767 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fn8h\" (UniqueName: \"kubernetes.io/projected/3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c-kube-api-access-7fn8h\") pod \"auto-csr-approver-29534904-2jxvw\" (UID: \"3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c\") " pod="openshift-infra/auto-csr-approver-29534904-2jxvw" Feb 26 08:24:00 crc kubenswrapper[4741]: I0226 08:24:00.383634 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fn8h\" (UniqueName: \"kubernetes.io/projected/3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c-kube-api-access-7fn8h\") pod \"auto-csr-approver-29534904-2jxvw\" (UID: \"3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c\") " 
pod="openshift-infra/auto-csr-approver-29534904-2jxvw" Feb 26 08:24:00 crc kubenswrapper[4741]: I0226 08:24:00.504456 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534904-2jxvw" Feb 26 08:24:00 crc kubenswrapper[4741]: I0226 08:24:00.832882 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534904-2jxvw"] Feb 26 08:24:01 crc kubenswrapper[4741]: I0226 08:24:01.253089 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534904-2jxvw" event={"ID":"3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c","Type":"ContainerStarted","Data":"56ee249dd8111823a73ebe2fb384f68977e16db8975aecb147021c6c7ef90e6c"} Feb 26 08:24:02 crc kubenswrapper[4741]: I0226 08:24:02.266069 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534904-2jxvw" event={"ID":"3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c","Type":"ContainerStarted","Data":"772393ae344a81ae49cb5da8b2bba006e72fe5aaf9324748af2599f62f5d1514"} Feb 26 08:24:02 crc kubenswrapper[4741]: I0226 08:24:02.299046 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534904-2jxvw" podStartSLOduration=1.500690066 podStartE2EDuration="2.299000761s" podCreationTimestamp="2026-02-26 08:24:00 +0000 UTC" firstStartedPulling="2026-02-26 08:24:00.845760643 +0000 UTC m=+675.841698080" lastFinishedPulling="2026-02-26 08:24:01.644071348 +0000 UTC m=+676.640008775" observedRunningTime="2026-02-26 08:24:02.286619517 +0000 UTC m=+677.282556974" watchObservedRunningTime="2026-02-26 08:24:02.299000761 +0000 UTC m=+677.294938188" Feb 26 08:24:03 crc kubenswrapper[4741]: I0226 08:24:03.289944 4741 generic.go:334] "Generic (PLEG): container finished" podID="3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c" containerID="772393ae344a81ae49cb5da8b2bba006e72fe5aaf9324748af2599f62f5d1514" exitCode=0 Feb 26 08:24:03 crc 
kubenswrapper[4741]: I0226 08:24:03.290094 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534904-2jxvw" event={"ID":"3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c","Type":"ContainerDied","Data":"772393ae344a81ae49cb5da8b2bba006e72fe5aaf9324748af2599f62f5d1514"} Feb 26 08:24:04 crc kubenswrapper[4741]: I0226 08:24:04.675375 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534904-2jxvw" Feb 26 08:24:04 crc kubenswrapper[4741]: I0226 08:24:04.828211 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fn8h\" (UniqueName: \"kubernetes.io/projected/3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c-kube-api-access-7fn8h\") pod \"3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c\" (UID: \"3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c\") " Feb 26 08:24:04 crc kubenswrapper[4741]: I0226 08:24:04.835159 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c-kube-api-access-7fn8h" (OuterVolumeSpecName: "kube-api-access-7fn8h") pod "3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c" (UID: "3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c"). InnerVolumeSpecName "kube-api-access-7fn8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:24:04 crc kubenswrapper[4741]: I0226 08:24:04.930170 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fn8h\" (UniqueName: \"kubernetes.io/projected/3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c-kube-api-access-7fn8h\") on node \"crc\" DevicePath \"\"" Feb 26 08:24:05 crc kubenswrapper[4741]: I0226 08:24:05.309010 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534904-2jxvw" event={"ID":"3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c","Type":"ContainerDied","Data":"56ee249dd8111823a73ebe2fb384f68977e16db8975aecb147021c6c7ef90e6c"} Feb 26 08:24:05 crc kubenswrapper[4741]: I0226 08:24:05.309428 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56ee249dd8111823a73ebe2fb384f68977e16db8975aecb147021c6c7ef90e6c" Feb 26 08:24:05 crc kubenswrapper[4741]: I0226 08:24:05.309080 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534904-2jxvw" Feb 26 08:24:05 crc kubenswrapper[4741]: I0226 08:24:05.363257 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534898-rr6lj"] Feb 26 08:24:05 crc kubenswrapper[4741]: I0226 08:24:05.369021 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534898-rr6lj"] Feb 26 08:24:05 crc kubenswrapper[4741]: I0226 08:24:05.802103 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d" path="/var/lib/kubelet/pods/e8c8f9fc-dd53-4233-97f6-ba5b2fdf0a1d/volumes" Feb 26 08:24:10 crc kubenswrapper[4741]: I0226 08:24:10.954909 4741 scope.go:117] "RemoveContainer" containerID="98d61524639d9639c79360f0c5b009b3778fd5d02444d8b6ff49415abea59481" Feb 26 08:25:11 crc kubenswrapper[4741]: I0226 08:25:11.039168 4741 scope.go:117] "RemoveContainer" 
containerID="dc1d4b761bb2eeac272e256a6a3c51402c86c237fbceb2a42b2bc6cd7e348d83" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.561883 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz"] Feb 26 08:25:27 crc kubenswrapper[4741]: E0226 08:25:27.562834 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c" containerName="oc" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.562853 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c" containerName="oc" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.563013 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c" containerName="oc" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.564677 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.569556 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.591935 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz"] Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.724151 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/652b2cb9-4551-45e5-a4b0-6f6720ec0792-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz\" (UID: \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 
08:25:27.724281 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pjn9\" (UniqueName: \"kubernetes.io/projected/652b2cb9-4551-45e5-a4b0-6f6720ec0792-kube-api-access-6pjn9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz\" (UID: \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.724342 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/652b2cb9-4551-45e5-a4b0-6f6720ec0792-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz\" (UID: \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.825566 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/652b2cb9-4551-45e5-a4b0-6f6720ec0792-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz\" (UID: \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.825883 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/652b2cb9-4551-45e5-a4b0-6f6720ec0792-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz\" (UID: \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.826143 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/652b2cb9-4551-45e5-a4b0-6f6720ec0792-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz\" (UID: \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.826211 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pjn9\" (UniqueName: \"kubernetes.io/projected/652b2cb9-4551-45e5-a4b0-6f6720ec0792-kube-api-access-6pjn9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz\" (UID: \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.826507 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/652b2cb9-4551-45e5-a4b0-6f6720ec0792-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz\" (UID: \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.857876 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pjn9\" (UniqueName: \"kubernetes.io/projected/652b2cb9-4551-45e5-a4b0-6f6720ec0792-kube-api-access-6pjn9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz\" (UID: \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" Feb 26 08:25:27 crc kubenswrapper[4741]: I0226 08:25:27.923597 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" Feb 26 08:25:28 crc kubenswrapper[4741]: I0226 08:25:28.198374 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz"] Feb 26 08:25:29 crc kubenswrapper[4741]: I0226 08:25:29.002674 4741 generic.go:334] "Generic (PLEG): container finished" podID="652b2cb9-4551-45e5-a4b0-6f6720ec0792" containerID="ba630753dc87d60339ab500de5659c99dcfa1fe064da3dcb3d09009c8abdb867" exitCode=0 Feb 26 08:25:29 crc kubenswrapper[4741]: I0226 08:25:29.002751 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" event={"ID":"652b2cb9-4551-45e5-a4b0-6f6720ec0792","Type":"ContainerDied","Data":"ba630753dc87d60339ab500de5659c99dcfa1fe064da3dcb3d09009c8abdb867"} Feb 26 08:25:29 crc kubenswrapper[4741]: I0226 08:25:29.002821 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" event={"ID":"652b2cb9-4551-45e5-a4b0-6f6720ec0792","Type":"ContainerStarted","Data":"b9770def63633b55efe84d2084a397ee425d95ef7d1890a51a91d879baed15fe"} Feb 26 08:25:31 crc kubenswrapper[4741]: I0226 08:25:31.020697 4741 generic.go:334] "Generic (PLEG): container finished" podID="652b2cb9-4551-45e5-a4b0-6f6720ec0792" containerID="86b2d0a654ba01550f27343d6987920add0252787955c01f406e0aa3eef3a29b" exitCode=0 Feb 26 08:25:31 crc kubenswrapper[4741]: I0226 08:25:31.020882 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" event={"ID":"652b2cb9-4551-45e5-a4b0-6f6720ec0792","Type":"ContainerDied","Data":"86b2d0a654ba01550f27343d6987920add0252787955c01f406e0aa3eef3a29b"} Feb 26 08:25:32 crc kubenswrapper[4741]: I0226 08:25:32.031253 4741 
generic.go:334] "Generic (PLEG): container finished" podID="652b2cb9-4551-45e5-a4b0-6f6720ec0792" containerID="1394b12a06d17b2c0b27c27b408bc18beb86a75013ecda2cccf400e54efac399" exitCode=0 Feb 26 08:25:32 crc kubenswrapper[4741]: I0226 08:25:32.031313 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" event={"ID":"652b2cb9-4551-45e5-a4b0-6f6720ec0792","Type":"ContainerDied","Data":"1394b12a06d17b2c0b27c27b408bc18beb86a75013ecda2cccf400e54efac399"} Feb 26 08:25:33 crc kubenswrapper[4741]: I0226 08:25:33.312415 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" Feb 26 08:25:33 crc kubenswrapper[4741]: I0226 08:25:33.427169 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/652b2cb9-4551-45e5-a4b0-6f6720ec0792-bundle\") pod \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\" (UID: \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\") " Feb 26 08:25:33 crc kubenswrapper[4741]: I0226 08:25:33.427240 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/652b2cb9-4551-45e5-a4b0-6f6720ec0792-util\") pod \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\" (UID: \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\") " Feb 26 08:25:33 crc kubenswrapper[4741]: I0226 08:25:33.427293 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pjn9\" (UniqueName: \"kubernetes.io/projected/652b2cb9-4551-45e5-a4b0-6f6720ec0792-kube-api-access-6pjn9\") pod \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\" (UID: \"652b2cb9-4551-45e5-a4b0-6f6720ec0792\") " Feb 26 08:25:33 crc kubenswrapper[4741]: I0226 08:25:33.430323 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/652b2cb9-4551-45e5-a4b0-6f6720ec0792-bundle" (OuterVolumeSpecName: "bundle") pod "652b2cb9-4551-45e5-a4b0-6f6720ec0792" (UID: "652b2cb9-4551-45e5-a4b0-6f6720ec0792"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:25:33 crc kubenswrapper[4741]: I0226 08:25:33.437879 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652b2cb9-4551-45e5-a4b0-6f6720ec0792-kube-api-access-6pjn9" (OuterVolumeSpecName: "kube-api-access-6pjn9") pod "652b2cb9-4551-45e5-a4b0-6f6720ec0792" (UID: "652b2cb9-4551-45e5-a4b0-6f6720ec0792"). InnerVolumeSpecName "kube-api-access-6pjn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:25:33 crc kubenswrapper[4741]: I0226 08:25:33.446198 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/652b2cb9-4551-45e5-a4b0-6f6720ec0792-util" (OuterVolumeSpecName: "util") pod "652b2cb9-4551-45e5-a4b0-6f6720ec0792" (UID: "652b2cb9-4551-45e5-a4b0-6f6720ec0792"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:25:33 crc kubenswrapper[4741]: I0226 08:25:33.529651 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pjn9\" (UniqueName: \"kubernetes.io/projected/652b2cb9-4551-45e5-a4b0-6f6720ec0792-kube-api-access-6pjn9\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:33 crc kubenswrapper[4741]: I0226 08:25:33.529724 4741 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/652b2cb9-4551-45e5-a4b0-6f6720ec0792-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:33 crc kubenswrapper[4741]: I0226 08:25:33.529756 4741 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/652b2cb9-4551-45e5-a4b0-6f6720ec0792-util\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:34 crc kubenswrapper[4741]: I0226 08:25:34.053498 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" event={"ID":"652b2cb9-4551-45e5-a4b0-6f6720ec0792","Type":"ContainerDied","Data":"b9770def63633b55efe84d2084a397ee425d95ef7d1890a51a91d879baed15fe"} Feb 26 08:25:34 crc kubenswrapper[4741]: I0226 08:25:34.053563 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9770def63633b55efe84d2084a397ee425d95ef7d1890a51a91d879baed15fe" Feb 26 08:25:34 crc kubenswrapper[4741]: I0226 08:25:34.053614 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz" Feb 26 08:25:38 crc kubenswrapper[4741]: I0226 08:25:38.856521 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2w5nl"] Feb 26 08:25:38 crc kubenswrapper[4741]: I0226 08:25:38.857490 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovn-controller" containerID="cri-o://6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775" gracePeriod=30 Feb 26 08:25:38 crc kubenswrapper[4741]: I0226 08:25:38.857566 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="nbdb" containerID="cri-o://f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a" gracePeriod=30 Feb 26 08:25:38 crc kubenswrapper[4741]: I0226 08:25:38.857675 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="northd" containerID="cri-o://c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1" gracePeriod=30 Feb 26 08:25:38 crc kubenswrapper[4741]: I0226 08:25:38.857725 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0" gracePeriod=30 Feb 26 08:25:38 crc kubenswrapper[4741]: I0226 08:25:38.857890 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" 
containerName="ovn-acl-logging" containerID="cri-o://e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a" gracePeriod=30 Feb 26 08:25:38 crc kubenswrapper[4741]: I0226 08:25:38.858140 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="sbdb" containerID="cri-o://fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb" gracePeriod=30 Feb 26 08:25:38 crc kubenswrapper[4741]: I0226 08:25:38.858127 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="kube-rbac-proxy-node" containerID="cri-o://95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8" gracePeriod=30 Feb 26 08:25:38 crc kubenswrapper[4741]: I0226 08:25:38.943691 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" containerID="cri-o://3ca2b6a31da3bc8a7cce80daf6a680137b42b6140c6bd5152a84d658a4126507" gracePeriod=30 Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.095611 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mzt8d_3fd732e7-0e36-485f-b750-856d6869e697/kube-multus/2.log" Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.096230 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mzt8d_3fd732e7-0e36-485f-b750-856d6869e697/kube-multus/1.log" Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.096284 4741 generic.go:334] "Generic (PLEG): container finished" podID="3fd732e7-0e36-485f-b750-856d6869e697" containerID="81add57eb7d12481eebd82f7729d18c5d2c5076fc39fa786742ae6a801f185d6" exitCode=2 Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.096364 4741 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mzt8d" event={"ID":"3fd732e7-0e36-485f-b750-856d6869e697","Type":"ContainerDied","Data":"81add57eb7d12481eebd82f7729d18c5d2c5076fc39fa786742ae6a801f185d6"} Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.096421 4741 scope.go:117] "RemoveContainer" containerID="8f83e4649adee5352b1520ed1430b7030260c99f799e62efe117c63b21850a10" Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.097292 4741 scope.go:117] "RemoveContainer" containerID="81add57eb7d12481eebd82f7729d18c5d2c5076fc39fa786742ae6a801f185d6" Feb 26 08:25:39 crc kubenswrapper[4741]: E0226 08:25:39.097749 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mzt8d_openshift-multus(3fd732e7-0e36-485f-b750-856d6869e697)\"" pod="openshift-multus/multus-mzt8d" podUID="3fd732e7-0e36-485f-b750-856d6869e697" Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.099159 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovnkube-controller/3.log" Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.101299 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovn-acl-logging/0.log" Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.101774 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovn-controller/0.log" Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.102310 4741 generic.go:334] "Generic (PLEG): container finished" podID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerID="3ca2b6a31da3bc8a7cce80daf6a680137b42b6140c6bd5152a84d658a4126507" exitCode=0 Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 
08:25:39.102340 4741 generic.go:334] "Generic (PLEG): container finished" podID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerID="fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb" exitCode=0 Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.102349 4741 generic.go:334] "Generic (PLEG): container finished" podID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerID="e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a" exitCode=143 Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.102359 4741 generic.go:334] "Generic (PLEG): container finished" podID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerID="6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775" exitCode=143 Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.102380 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"3ca2b6a31da3bc8a7cce80daf6a680137b42b6140c6bd5152a84d658a4126507"} Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.102407 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb"} Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.102418 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a"} Feb 26 08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.102426 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775"} Feb 26 
08:25:39 crc kubenswrapper[4741]: I0226 08:25:39.139907 4741 scope.go:117] "RemoveContainer" containerID="eb2adbf7b31ebb67cc611a1fe7286327ca645d916697f054b66f3af1f7bf1494" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.112145 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovn-acl-logging/0.log" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.113039 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovn-controller/0.log" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.113438 4741 generic.go:334] "Generic (PLEG): container finished" podID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerID="f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a" exitCode=0 Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.113469 4741 generic.go:334] "Generic (PLEG): container finished" podID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerID="c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1" exitCode=0 Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.113480 4741 generic.go:334] "Generic (PLEG): container finished" podID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerID="5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0" exitCode=0 Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.113490 4741 generic.go:334] "Generic (PLEG): container finished" podID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerID="95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8" exitCode=0 Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.113493 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a"} Feb 26 08:25:40 crc 
kubenswrapper[4741]: I0226 08:25:40.113539 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1"} Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.113559 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0"} Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.113577 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8"} Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.113588 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" event={"ID":"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1","Type":"ContainerDied","Data":"a46f1eb1950e35ea0ef425ed873f67e7d77811df9f16badf75d6f58ea42b664c"} Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.113599 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a46f1eb1950e35ea0ef425ed873f67e7d77811df9f16badf75d6f58ea42b664c" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.115665 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mzt8d_3fd732e7-0e36-485f-b750-856d6869e697/kube-multus/2.log" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.136369 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovn-acl-logging/0.log" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.137346 4741 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2w5nl_1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/ovn-controller/0.log" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.137975 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251071 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t7mqq"] Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251374 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovn-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251388 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovn-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251398 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251405 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251413 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251419 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251427 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652b2cb9-4551-45e5-a4b0-6f6720ec0792" containerName="pull" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251434 4741 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="652b2cb9-4551-45e5-a4b0-6f6720ec0792" containerName="pull" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251447 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652b2cb9-4551-45e5-a4b0-6f6720ec0792" containerName="util" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251454 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="652b2cb9-4551-45e5-a4b0-6f6720ec0792" containerName="util" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251467 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251474 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251484 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652b2cb9-4551-45e5-a4b0-6f6720ec0792" containerName="extract" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251491 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="652b2cb9-4551-45e5-a4b0-6f6720ec0792" containerName="extract" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251499 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="kube-rbac-proxy-node" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251505 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="kube-rbac-proxy-node" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251515 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="nbdb" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251521 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" 
containerName="nbdb" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251533 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="kubecfg-setup" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251538 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="kubecfg-setup" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251544 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251550 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251557 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovn-acl-logging" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251564 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovn-acl-logging" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251575 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="northd" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251582 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="northd" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.251592 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="sbdb" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251598 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="sbdb" Feb 26 
08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251789 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovn-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251803 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="kube-rbac-proxy-node" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251811 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251818 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251825 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="northd" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251832 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovn-acl-logging" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251839 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251847 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251856 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="sbdb" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251864 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" 
containerName="nbdb" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251873 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251880 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="652b2cb9-4551-45e5-a4b0-6f6720ec0792" containerName="extract" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.251888 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.252008 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.252016 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: E0226 08:25:40.252024 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.252030 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" containerName="ovnkube-controller" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.254417 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.263936 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-run-ovn-kubernetes\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.263977 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-systemd\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264008 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-cni-bin\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264036 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-openvswitch\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264060 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovnkube-config\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264090 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-jzbcz\" (UniqueName: \"kubernetes.io/projected/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-kube-api-access-jzbcz\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264134 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-ovn\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264161 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-node-log\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264189 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovnkube-script-lib\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264214 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-log-socket\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264233 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264255 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-slash\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264231 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264295 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-var-lib-openvswitch\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264305 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264321 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-kubelet\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264367 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovn-node-metrics-cert\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264339 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264370 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-node-log" (OuterVolumeSpecName: "node-log") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264428 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-env-overrides\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264457 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-run-netns\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264414 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-slash" (OuterVolumeSpecName: "host-slash") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264431 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-log-socket" (OuterVolumeSpecName: "log-socket") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264455 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). 
InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264483 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-cni-netd\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264505 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-etc-openvswitch\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264527 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-systemd-units\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264555 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\" (UID: \"1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1\") " Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264457 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264709 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264721 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264723 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264742 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264749 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264767 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264778 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.264909 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266589 4741 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266625 4741 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266639 4741 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266653 4741 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266669 4741 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266685 4741 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266700 4741 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" 
Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266715 4741 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266727 4741 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266738 4741 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266753 4741 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266764 4741 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-node-log\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266776 4741 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266788 4741 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-log-socket\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266799 4741 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-slash\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266810 4741 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.266821 4741 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.283386 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.292462 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-kube-api-access-jzbcz" (OuterVolumeSpecName: "kube-api-access-jzbcz") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "kube-api-access-jzbcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.309781 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" (UID: "1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368431 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/234558da-0d49-4f9f-aec9-7d5a8e63cfef-ovnkube-config\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368486 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/234558da-0d49-4f9f-aec9-7d5a8e63cfef-ovn-node-metrics-cert\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368514 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368545 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-log-socket\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368564 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-run-openvswitch\") pod \"ovnkube-node-t7mqq\" (UID: 
\"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368584 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368608 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/234558da-0d49-4f9f-aec9-7d5a8e63cfef-ovnkube-script-lib\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368623 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-node-log\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368832 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-cni-bin\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368850 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-var-lib-openvswitch\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368868 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-slash\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368882 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-etc-openvswitch\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368908 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/234558da-0d49-4f9f-aec9-7d5a8e63cfef-env-overrides\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368934 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-systemd-units\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368953 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-run-netns\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368976 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-cni-netd\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.368999 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-run-systemd\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.369026 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-run-ovn\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.369045 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rdhr\" (UniqueName: \"kubernetes.io/projected/234558da-0d49-4f9f-aec9-7d5a8e63cfef-kube-api-access-8rdhr\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.369064 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-kubelet\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.369168 4741 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.369190 4741 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.369219 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzbcz\" (UniqueName: \"kubernetes.io/projected/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1-kube-api-access-jzbcz\") on node \"crc\" DevicePath \"\"" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470249 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-run-systemd\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470654 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-run-ovn\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470682 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rdhr\" (UniqueName: 
\"kubernetes.io/projected/234558da-0d49-4f9f-aec9-7d5a8e63cfef-kube-api-access-8rdhr\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470712 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-kubelet\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470412 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-run-systemd\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470746 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-run-ovn\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470742 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/234558da-0d49-4f9f-aec9-7d5a8e63cfef-ovnkube-config\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470823 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-kubelet\") pod \"ovnkube-node-t7mqq\" (UID: 
\"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470852 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/234558da-0d49-4f9f-aec9-7d5a8e63cfef-ovn-node-metrics-cert\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470880 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470904 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-log-socket\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470928 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-run-openvswitch\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470949 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7mqq\" (UID: 
\"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470971 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470977 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/234558da-0d49-4f9f-aec9-7d5a8e63cfef-ovnkube-script-lib\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471014 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.470992 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-log-socket\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471047 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-node-log\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471129 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-cni-bin\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471141 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-run-openvswitch\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471197 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-node-log\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471186 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-var-lib-openvswitch\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471226 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-cni-bin\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471165 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-var-lib-openvswitch\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471299 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-slash\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471319 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-etc-openvswitch\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471332 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-slash\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471365 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/234558da-0d49-4f9f-aec9-7d5a8e63cfef-env-overrides\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471377 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-etc-openvswitch\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471437 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-systemd-units\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471477 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-run-netns\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471519 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-cni-netd\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471821 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-cni-netd\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471844 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-systemd-units\") pod 
\"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.471866 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/234558da-0d49-4f9f-aec9-7d5a8e63cfef-host-run-netns\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.472363 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/234558da-0d49-4f9f-aec9-7d5a8e63cfef-ovnkube-config\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.472467 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/234558da-0d49-4f9f-aec9-7d5a8e63cfef-env-overrides\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.473103 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/234558da-0d49-4f9f-aec9-7d5a8e63cfef-ovnkube-script-lib\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.481231 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/234558da-0d49-4f9f-aec9-7d5a8e63cfef-ovn-node-metrics-cert\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.497917 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rdhr\" (UniqueName: \"kubernetes.io/projected/234558da-0d49-4f9f-aec9-7d5a8e63cfef-kube-api-access-8rdhr\") pod \"ovnkube-node-t7mqq\" (UID: \"234558da-0d49-4f9f-aec9-7d5a8e63cfef\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:40 crc kubenswrapper[4741]: I0226 08:25:40.617650 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:41 crc kubenswrapper[4741]: I0226 08:25:41.124663 4741 generic.go:334] "Generic (PLEG): container finished" podID="234558da-0d49-4f9f-aec9-7d5a8e63cfef" containerID="db543e387088e0c3af3d540194cb3ceb426579a447a912c8208974ef7fc69320" exitCode=0 Feb 26 08:25:41 crc kubenswrapper[4741]: I0226 08:25:41.124761 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" event={"ID":"234558da-0d49-4f9f-aec9-7d5a8e63cfef","Type":"ContainerDied","Data":"db543e387088e0c3af3d540194cb3ceb426579a447a912c8208974ef7fc69320"} Feb 26 08:25:41 crc kubenswrapper[4741]: I0226 08:25:41.124815 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2w5nl" Feb 26 08:25:41 crc kubenswrapper[4741]: I0226 08:25:41.124822 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" event={"ID":"234558da-0d49-4f9f-aec9-7d5a8e63cfef","Type":"ContainerStarted","Data":"7916a190ee96dbf99df00bfc7fff950a4bb3dbbb5a6008e9edf7d1625555f0d5"} Feb 26 08:25:41 crc kubenswrapper[4741]: I0226 08:25:41.250217 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2w5nl"] Feb 26 08:25:41 crc kubenswrapper[4741]: I0226 08:25:41.264782 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2w5nl"] Feb 26 08:25:41 crc kubenswrapper[4741]: I0226 08:25:41.796328 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1" path="/var/lib/kubelet/pods/1116a7ff-8389-43fe-9f5f-b2b2c23ca9f1/volumes" Feb 26 08:25:42 crc kubenswrapper[4741]: I0226 08:25:42.135442 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" event={"ID":"234558da-0d49-4f9f-aec9-7d5a8e63cfef","Type":"ContainerStarted","Data":"897317a1f6595b117870ae6099c4a2cc5847809d59ff791afe1b89d0e685fe4c"} Feb 26 08:25:42 crc kubenswrapper[4741]: I0226 08:25:42.135841 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" event={"ID":"234558da-0d49-4f9f-aec9-7d5a8e63cfef","Type":"ContainerStarted","Data":"b53b4563fe012bff0c338560de40f5d856405c568f2eebd301953c2bfccad549"} Feb 26 08:25:42 crc kubenswrapper[4741]: I0226 08:25:42.135855 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" event={"ID":"234558da-0d49-4f9f-aec9-7d5a8e63cfef","Type":"ContainerStarted","Data":"9ca60fdd13f5dff35dba713d82fc3dd111e0eecc712eb4d874cdec8fcb3a37d1"} Feb 26 08:25:42 crc kubenswrapper[4741]: I0226 
08:25:42.135871 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" event={"ID":"234558da-0d49-4f9f-aec9-7d5a8e63cfef","Type":"ContainerStarted","Data":"8ba3cf115142c4c856d21490b3318e55c8900982691de476e54347a38cb91568"} Feb 26 08:25:42 crc kubenswrapper[4741]: I0226 08:25:42.135883 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" event={"ID":"234558da-0d49-4f9f-aec9-7d5a8e63cfef","Type":"ContainerStarted","Data":"4e3f12990f04f45f6c39392bb58e36ef7564b65f689976e166721401ce79923e"} Feb 26 08:25:43 crc kubenswrapper[4741]: I0226 08:25:43.146807 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" event={"ID":"234558da-0d49-4f9f-aec9-7d5a8e63cfef","Type":"ContainerStarted","Data":"6a8cde2f1fb83b253068424dd6a81ccf275fe06f7c4ca6cd8f7fee29ed548482"} Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.166875 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" event={"ID":"234558da-0d49-4f9f-aec9-7d5a8e63cfef","Type":"ContainerStarted","Data":"ed9837dec1a3edb09b1ca883c428b6064afc4e8d442d09a274097549af948bbd"} Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.652950 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb"] Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.653758 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.659445 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.659545 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.660035 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-kwkxs" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.756312 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg4dq\" (UniqueName: \"kubernetes.io/projected/93b9d30b-e6dd-43d5-8599-eee30ab515a5-kube-api-access-fg4dq\") pod \"obo-prometheus-operator-68bc856cb9-mwbdb\" (UID: \"93b9d30b-e6dd-43d5-8599-eee30ab515a5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.785478 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9"] Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.794522 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.828548 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-chsvg" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.829880 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.833758 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd"] Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.834721 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.858529 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fec9499c-44d9-45b6-8361-76b7c0e2ed54-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd\" (UID: \"fec9499c-44d9-45b6-8361-76b7c0e2ed54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.858633 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg4dq\" (UniqueName: \"kubernetes.io/projected/93b9d30b-e6dd-43d5-8599-eee30ab515a5-kube-api-access-fg4dq\") pod \"obo-prometheus-operator-68bc856cb9-mwbdb\" (UID: \"93b9d30b-e6dd-43d5-8599-eee30ab515a5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.858666 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c7ecfae-042e-4aad-8b9d-2e6a4284d75a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9\" (UID: \"5c7ecfae-042e-4aad-8b9d-2e6a4284d75a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.858688 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fec9499c-44d9-45b6-8361-76b7c0e2ed54-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd\" (UID: \"fec9499c-44d9-45b6-8361-76b7c0e2ed54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.858722 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c7ecfae-042e-4aad-8b9d-2e6a4284d75a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9\" (UID: \"5c7ecfae-042e-4aad-8b9d-2e6a4284d75a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.877186 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.887644 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.904020 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg4dq\" (UniqueName: \"kubernetes.io/projected/93b9d30b-e6dd-43d5-8599-eee30ab515a5-kube-api-access-fg4dq\") pod \"obo-prometheus-operator-68bc856cb9-mwbdb\" (UID: 
\"93b9d30b-e6dd-43d5-8599-eee30ab515a5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.960972 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fec9499c-44d9-45b6-8361-76b7c0e2ed54-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd\" (UID: \"fec9499c-44d9-45b6-8361-76b7c0e2ed54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.961123 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c7ecfae-042e-4aad-8b9d-2e6a4284d75a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9\" (UID: \"5c7ecfae-042e-4aad-8b9d-2e6a4284d75a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.961156 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fec9499c-44d9-45b6-8361-76b7c0e2ed54-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd\" (UID: \"fec9499c-44d9-45b6-8361-76b7c0e2ed54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.961194 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c7ecfae-042e-4aad-8b9d-2e6a4284d75a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9\" (UID: \"5c7ecfae-042e-4aad-8b9d-2e6a4284d75a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.965918 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c7ecfae-042e-4aad-8b9d-2e6a4284d75a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9\" (UID: \"5c7ecfae-042e-4aad-8b9d-2e6a4284d75a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.966448 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fec9499c-44d9-45b6-8361-76b7c0e2ed54-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd\" (UID: \"fec9499c-44d9-45b6-8361-76b7c0e2ed54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.966782 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fec9499c-44d9-45b6-8361-76b7c0e2ed54-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd\" (UID: \"fec9499c-44d9-45b6-8361-76b7c0e2ed54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.976300 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-kwkxs" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.982265 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.982988 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c7ecfae-042e-4aad-8b9d-2e6a4284d75a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9\" (UID: \"5c7ecfae-042e-4aad-8b9d-2e6a4284d75a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.990556 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9vkjt"] Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.991992 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.995186 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-x44dl" Feb 26 08:25:45 crc kubenswrapper[4741]: I0226 08:25:45.995191 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.036856 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators_93b9d30b-e6dd-43d5-8599-eee30ab515a5_0(6110bc99aecbecec507ab3d83b3175729769a44979963820834aa4d264f663ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.037447 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators_93b9d30b-e6dd-43d5-8599-eee30ab515a5_0(6110bc99aecbecec507ab3d83b3175729769a44979963820834aa4d264f663ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.037479 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators_93b9d30b-e6dd-43d5-8599-eee30ab515a5_0(6110bc99aecbecec507ab3d83b3175729769a44979963820834aa4d264f663ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.037559 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators(93b9d30b-e6dd-43d5-8599-eee30ab515a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators(93b9d30b-e6dd-43d5-8599-eee30ab515a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators_93b9d30b-e6dd-43d5-8599-eee30ab515a5_0(6110bc99aecbecec507ab3d83b3175729769a44979963820834aa4d264f663ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" podUID="93b9d30b-e6dd-43d5-8599-eee30ab515a5" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.061805 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/480c4db0-8b7a-4ef8-a2e6-c7289a9f21af-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9vkjt\" (UID: \"480c4db0-8b7a-4ef8-a2e6-c7289a9f21af\") " pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.061849 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9967\" (UniqueName: \"kubernetes.io/projected/480c4db0-8b7a-4ef8-a2e6-c7289a9f21af-kube-api-access-g9967\") pod \"observability-operator-59bdc8b94-9vkjt\" (UID: \"480c4db0-8b7a-4ef8-a2e6-c7289a9f21af\") " pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.100058 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7c8vj"] Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.101014 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.106562 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-nzwb9" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.138091 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.160319 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.163824 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/480c4db0-8b7a-4ef8-a2e6-c7289a9f21af-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9vkjt\" (UID: \"480c4db0-8b7a-4ef8-a2e6-c7289a9f21af\") " pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.163873 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9967\" (UniqueName: \"kubernetes.io/projected/480c4db0-8b7a-4ef8-a2e6-c7289a9f21af-kube-api-access-g9967\") pod \"observability-operator-59bdc8b94-9vkjt\" (UID: \"480c4db0-8b7a-4ef8-a2e6-c7289a9f21af\") " pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.163909 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st6ff\" (UniqueName: \"kubernetes.io/projected/90d97168-5e93-4e51-b66e-d35fc864211d-kube-api-access-st6ff\") pod \"perses-operator-5bf474d74f-7c8vj\" (UID: \"90d97168-5e93-4e51-b66e-d35fc864211d\") " pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.163968 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/90d97168-5e93-4e51-b66e-d35fc864211d-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7c8vj\" (UID: \"90d97168-5e93-4e51-b66e-d35fc864211d\") " pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.170047 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/480c4db0-8b7a-4ef8-a2e6-c7289a9f21af-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9vkjt\" (UID: \"480c4db0-8b7a-4ef8-a2e6-c7289a9f21af\") " pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.180875 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a_0(386cc230d4354a561e7f7fe467624e2b92a18796cba4a978519969ed94015c6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.180975 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a_0(386cc230d4354a561e7f7fe467624e2b92a18796cba4a978519969ed94015c6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.181022 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a_0(386cc230d4354a561e7f7fe467624e2b92a18796cba4a978519969ed94015c6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.181089 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators(5c7ecfae-042e-4aad-8b9d-2e6a4284d75a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators(5c7ecfae-042e-4aad-8b9d-2e6a4284d75a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a_0(386cc230d4354a561e7f7fe467624e2b92a18796cba4a978519969ed94015c6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" podUID="5c7ecfae-042e-4aad-8b9d-2e6a4284d75a" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.183920 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9967\" (UniqueName: \"kubernetes.io/projected/480c4db0-8b7a-4ef8-a2e6-c7289a9f21af-kube-api-access-g9967\") pod \"observability-operator-59bdc8b94-9vkjt\" (UID: \"480c4db0-8b7a-4ef8-a2e6-c7289a9f21af\") " pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.195464 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators_fec9499c-44d9-45b6-8361-76b7c0e2ed54_0(8c0a64b61eec878cb33e1fd33e2ff77530d46d1b7781276b6eab077dec407134): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.195552 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators_fec9499c-44d9-45b6-8361-76b7c0e2ed54_0(8c0a64b61eec878cb33e1fd33e2ff77530d46d1b7781276b6eab077dec407134): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.195579 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators_fec9499c-44d9-45b6-8361-76b7c0e2ed54_0(8c0a64b61eec878cb33e1fd33e2ff77530d46d1b7781276b6eab077dec407134): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.195645 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators(fec9499c-44d9-45b6-8361-76b7c0e2ed54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators(fec9499c-44d9-45b6-8361-76b7c0e2ed54)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators_fec9499c-44d9-45b6-8361-76b7c0e2ed54_0(8c0a64b61eec878cb33e1fd33e2ff77530d46d1b7781276b6eab077dec407134): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" podUID="fec9499c-44d9-45b6-8361-76b7c0e2ed54" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.265850 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/90d97168-5e93-4e51-b66e-d35fc864211d-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7c8vj\" (UID: \"90d97168-5e93-4e51-b66e-d35fc864211d\") " pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.266092 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st6ff\" (UniqueName: \"kubernetes.io/projected/90d97168-5e93-4e51-b66e-d35fc864211d-kube-api-access-st6ff\") pod \"perses-operator-5bf474d74f-7c8vj\" (UID: \"90d97168-5e93-4e51-b66e-d35fc864211d\") " pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.266783 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/90d97168-5e93-4e51-b66e-d35fc864211d-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7c8vj\" (UID: \"90d97168-5e93-4e51-b66e-d35fc864211d\") " pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.286733 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st6ff\" (UniqueName: \"kubernetes.io/projected/90d97168-5e93-4e51-b66e-d35fc864211d-kube-api-access-st6ff\") pod \"perses-operator-5bf474d74f-7c8vj\" (UID: \"90d97168-5e93-4e51-b66e-d35fc864211d\") " pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.356834 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.386095 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9vkjt_openshift-operators_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af_0(689bd248aeaefa9e844b80b772a4303bba0847ed214ec5b565906c0e56fe0f3c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.386282 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9vkjt_openshift-operators_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af_0(689bd248aeaefa9e844b80b772a4303bba0847ed214ec5b565906c0e56fe0f3c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.386337 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9vkjt_openshift-operators_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af_0(689bd248aeaefa9e844b80b772a4303bba0847ed214ec5b565906c0e56fe0f3c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.386423 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9vkjt_openshift-operators(480c4db0-8b7a-4ef8-a2e6-c7289a9f21af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9vkjt_openshift-operators(480c4db0-8b7a-4ef8-a2e6-c7289a9f21af)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9vkjt_openshift-operators_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af_0(689bd248aeaefa9e844b80b772a4303bba0847ed214ec5b565906c0e56fe0f3c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" podUID="480c4db0-8b7a-4ef8-a2e6-c7289a9f21af" Feb 26 08:25:46 crc kubenswrapper[4741]: I0226 08:25:46.417513 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.446516 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7c8vj_openshift-operators_90d97168-5e93-4e51-b66e-d35fc864211d_0(6ec63740520e12d3c109b257b6d808ee4fbe46d599e17a59ad9b90d707dc078b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.446665 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7c8vj_openshift-operators_90d97168-5e93-4e51-b66e-d35fc864211d_0(6ec63740520e12d3c109b257b6d808ee4fbe46d599e17a59ad9b90d707dc078b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.446703 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7c8vj_openshift-operators_90d97168-5e93-4e51-b66e-d35fc864211d_0(6ec63740520e12d3c109b257b6d808ee4fbe46d599e17a59ad9b90d707dc078b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:46 crc kubenswrapper[4741]: E0226 08:25:46.446800 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-7c8vj_openshift-operators(90d97168-5e93-4e51-b66e-d35fc864211d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-7c8vj_openshift-operators(90d97168-5e93-4e51-b66e-d35fc864211d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7c8vj_openshift-operators_90d97168-5e93-4e51-b66e-d35fc864211d_0(6ec63740520e12d3c109b257b6d808ee4fbe46d599e17a59ad9b90d707dc078b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" podUID="90d97168-5e93-4e51-b66e-d35fc864211d" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.190904 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" event={"ID":"234558da-0d49-4f9f-aec9-7d5a8e63cfef","Type":"ContainerStarted","Data":"9d5d0780d82e73b740e4bfbbaf61959dcb03736b11408c26249a00ae77693b45"} Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.191341 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.191361 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.191375 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.238426 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.256480 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.260282 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" podStartSLOduration=8.260257502 podStartE2EDuration="8.260257502s" podCreationTimestamp="2026-02-26 08:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:25:48.248785967 +0000 UTC m=+783.244723364" watchObservedRunningTime="2026-02-26 08:25:48.260257502 +0000 UTC m=+783.256194889" Feb 26 08:25:48 crc kubenswrapper[4741]: 
I0226 08:25:48.752875 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb"] Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.753051 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.753691 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.757889 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9vkjt"] Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.758493 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.759196 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.774481 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd"] Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.774647 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.775275 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.797373 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9"] Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.798984 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.800892 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.822370 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators_93b9d30b-e6dd-43d5-8599-eee30ab515a5_0(cdcf433e1616d491ab33aa741e10f82f240bc28750a4595937a3b5ce9ecf2661): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.822459 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators_93b9d30b-e6dd-43d5-8599-eee30ab515a5_0(cdcf433e1616d491ab33aa741e10f82f240bc28750a4595937a3b5ce9ecf2661): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.822488 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators_93b9d30b-e6dd-43d5-8599-eee30ab515a5_0(cdcf433e1616d491ab33aa741e10f82f240bc28750a4595937a3b5ce9ecf2661): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.822544 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators(93b9d30b-e6dd-43d5-8599-eee30ab515a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators(93b9d30b-e6dd-43d5-8599-eee30ab515a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators_93b9d30b-e6dd-43d5-8599-eee30ab515a5_0(cdcf433e1616d491ab33aa741e10f82f240bc28750a4595937a3b5ce9ecf2661): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" podUID="93b9d30b-e6dd-43d5-8599-eee30ab515a5" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.823721 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7c8vj"] Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.823869 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:48 crc kubenswrapper[4741]: I0226 08:25:48.824782 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.833258 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9vkjt_openshift-operators_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af_0(65871a7f7713055ed9c814ffe4784b27a28302da903df47a3fec4c1e9ea02a36): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.833374 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9vkjt_openshift-operators_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af_0(65871a7f7713055ed9c814ffe4784b27a28302da903df47a3fec4c1e9ea02a36): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.833422 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9vkjt_openshift-operators_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af_0(65871a7f7713055ed9c814ffe4784b27a28302da903df47a3fec4c1e9ea02a36): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.833515 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9vkjt_openshift-operators(480c4db0-8b7a-4ef8-a2e6-c7289a9f21af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9vkjt_openshift-operators(480c4db0-8b7a-4ef8-a2e6-c7289a9f21af)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9vkjt_openshift-operators_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af_0(65871a7f7713055ed9c814ffe4784b27a28302da903df47a3fec4c1e9ea02a36): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" podUID="480c4db0-8b7a-4ef8-a2e6-c7289a9f21af" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.858426 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators_fec9499c-44d9-45b6-8361-76b7c0e2ed54_0(9875766bba25c8f67f0ce5d24a0c1abce4ff68a208d3be15ca6add04ec1ebfd3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.858508 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators_fec9499c-44d9-45b6-8361-76b7c0e2ed54_0(9875766bba25c8f67f0ce5d24a0c1abce4ff68a208d3be15ca6add04ec1ebfd3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.858537 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators_fec9499c-44d9-45b6-8361-76b7c0e2ed54_0(9875766bba25c8f67f0ce5d24a0c1abce4ff68a208d3be15ca6add04ec1ebfd3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.858602 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators(fec9499c-44d9-45b6-8361-76b7c0e2ed54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators(fec9499c-44d9-45b6-8361-76b7c0e2ed54)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators_fec9499c-44d9-45b6-8361-76b7c0e2ed54_0(9875766bba25c8f67f0ce5d24a0c1abce4ff68a208d3be15ca6add04ec1ebfd3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" podUID="fec9499c-44d9-45b6-8361-76b7c0e2ed54" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.882812 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7c8vj_openshift-operators_90d97168-5e93-4e51-b66e-d35fc864211d_0(2e77e763609f12fededa0f4730052ed087595228580243f4b03a8e0985d025f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.882923 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7c8vj_openshift-operators_90d97168-5e93-4e51-b66e-d35fc864211d_0(2e77e763609f12fededa0f4730052ed087595228580243f4b03a8e0985d025f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.882953 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7c8vj_openshift-operators_90d97168-5e93-4e51-b66e-d35fc864211d_0(2e77e763609f12fededa0f4730052ed087595228580243f4b03a8e0985d025f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.883010 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-7c8vj_openshift-operators(90d97168-5e93-4e51-b66e-d35fc864211d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-7c8vj_openshift-operators(90d97168-5e93-4e51-b66e-d35fc864211d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7c8vj_openshift-operators_90d97168-5e93-4e51-b66e-d35fc864211d_0(2e77e763609f12fededa0f4730052ed087595228580243f4b03a8e0985d025f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" podUID="90d97168-5e93-4e51-b66e-d35fc864211d" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.889379 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a_0(c0d8527fca68f3d623dad7336f4fb50037ebaec5ef9acc0811b44d95445657ef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.889470 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a_0(c0d8527fca68f3d623dad7336f4fb50037ebaec5ef9acc0811b44d95445657ef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.889501 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a_0(c0d8527fca68f3d623dad7336f4fb50037ebaec5ef9acc0811b44d95445657ef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:25:48 crc kubenswrapper[4741]: E0226 08:25:48.889562 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators(5c7ecfae-042e-4aad-8b9d-2e6a4284d75a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators(5c7ecfae-042e-4aad-8b9d-2e6a4284d75a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a_0(c0d8527fca68f3d623dad7336f4fb50037ebaec5ef9acc0811b44d95445657ef): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" podUID="5c7ecfae-042e-4aad-8b9d-2e6a4284d75a" Feb 26 08:25:51 crc kubenswrapper[4741]: I0226 08:25:51.789433 4741 scope.go:117] "RemoveContainer" containerID="81add57eb7d12481eebd82f7729d18c5d2c5076fc39fa786742ae6a801f185d6" Feb 26 08:25:51 crc kubenswrapper[4741]: E0226 08:25:51.790256 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mzt8d_openshift-multus(3fd732e7-0e36-485f-b750-856d6869e697)\"" pod="openshift-multus/multus-mzt8d" podUID="3fd732e7-0e36-485f-b750-856d6869e697" Feb 26 08:25:55 crc kubenswrapper[4741]: I0226 08:25:55.149829 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:25:55 crc kubenswrapper[4741]: I0226 08:25:55.150400 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.123980 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534906-w9g6b"] Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.125302 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534906-w9g6b" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.127868 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.128634 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.133381 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534906-w9g6b"] Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.134509 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.195597 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cf22\" (UniqueName: \"kubernetes.io/projected/6205896b-ec19-465c-b910-187543c44ddd-kube-api-access-9cf22\") pod \"auto-csr-approver-29534906-w9g6b\" (UID: \"6205896b-ec19-465c-b910-187543c44ddd\") " pod="openshift-infra/auto-csr-approver-29534906-w9g6b" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.297665 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cf22\" (UniqueName: \"kubernetes.io/projected/6205896b-ec19-465c-b910-187543c44ddd-kube-api-access-9cf22\") pod \"auto-csr-approver-29534906-w9g6b\" (UID: \"6205896b-ec19-465c-b910-187543c44ddd\") " pod="openshift-infra/auto-csr-approver-29534906-w9g6b" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.334079 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cf22\" (UniqueName: \"kubernetes.io/projected/6205896b-ec19-465c-b910-187543c44ddd-kube-api-access-9cf22\") pod \"auto-csr-approver-29534906-w9g6b\" (UID: \"6205896b-ec19-465c-b910-187543c44ddd\") " 
pod="openshift-infra/auto-csr-approver-29534906-w9g6b" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.441585 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534906-w9g6b" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.482030 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29534906-w9g6b_openshift-infra_6205896b-ec19-465c-b910-187543c44ddd_0(f9d487d52443155c36add4fa84546065fc36d50ed1e4476d071211f9a2ef3a5b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.482170 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29534906-w9g6b_openshift-infra_6205896b-ec19-465c-b910-187543c44ddd_0(f9d487d52443155c36add4fa84546065fc36d50ed1e4476d071211f9a2ef3a5b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29534906-w9g6b" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.482202 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29534906-w9g6b_openshift-infra_6205896b-ec19-465c-b910-187543c44ddd_0(f9d487d52443155c36add4fa84546065fc36d50ed1e4476d071211f9a2ef3a5b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-infra/auto-csr-approver-29534906-w9g6b" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.482273 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29534906-w9g6b_openshift-infra(6205896b-ec19-465c-b910-187543c44ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29534906-w9g6b_openshift-infra(6205896b-ec19-465c-b910-187543c44ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29534906-w9g6b_openshift-infra_6205896b-ec19-465c-b910-187543c44ddd_0(f9d487d52443155c36add4fa84546065fc36d50ed1e4476d071211f9a2ef3a5b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29534906-w9g6b" podUID="6205896b-ec19-465c-b910-187543c44ddd" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.787037 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.787074 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.787123 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.787738 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.788085 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:26:00 crc kubenswrapper[4741]: I0226 08:26:00.788302 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.874421 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a_0(75b192f3dfdabfe3650a28e260e0d7c1164c112ba3e33305995edf4ef9b3c6ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.874610 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a_0(75b192f3dfdabfe3650a28e260e0d7c1164c112ba3e33305995edf4ef9b3c6ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.874698 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a_0(75b192f3dfdabfe3650a28e260e0d7c1164c112ba3e33305995edf4ef9b3c6ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.874441 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9vkjt_openshift-operators_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af_0(4599fa3344f00403e2a3e551a02cdc910fdc364c02e1039abc3d3fd126c8fa5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.874888 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators(5c7ecfae-042e-4aad-8b9d-2e6a4284d75a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators(5c7ecfae-042e-4aad-8b9d-2e6a4284d75a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_openshift-operators_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a_0(75b192f3dfdabfe3650a28e260e0d7c1164c112ba3e33305995edf4ef9b3c6ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" podUID="5c7ecfae-042e-4aad-8b9d-2e6a4284d75a" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.875307 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9vkjt_openshift-operators_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af_0(4599fa3344f00403e2a3e551a02cdc910fdc364c02e1039abc3d3fd126c8fa5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.875355 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9vkjt_openshift-operators_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af_0(4599fa3344f00403e2a3e551a02cdc910fdc364c02e1039abc3d3fd126c8fa5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.875429 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9vkjt_openshift-operators(480c4db0-8b7a-4ef8-a2e6-c7289a9f21af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9vkjt_openshift-operators(480c4db0-8b7a-4ef8-a2e6-c7289a9f21af)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9vkjt_openshift-operators_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af_0(4599fa3344f00403e2a3e551a02cdc910fdc364c02e1039abc3d3fd126c8fa5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" podUID="480c4db0-8b7a-4ef8-a2e6-c7289a9f21af" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.878532 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators_fec9499c-44d9-45b6-8361-76b7c0e2ed54_0(0507464a5a7ad4c812a8b492487ac29b9149e7d27266b28c0f5e5691f84d6cb2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.878616 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators_fec9499c-44d9-45b6-8361-76b7c0e2ed54_0(0507464a5a7ad4c812a8b492487ac29b9149e7d27266b28c0f5e5691f84d6cb2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.878647 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators_fec9499c-44d9-45b6-8361-76b7c0e2ed54_0(0507464a5a7ad4c812a8b492487ac29b9149e7d27266b28c0f5e5691f84d6cb2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" Feb 26 08:26:00 crc kubenswrapper[4741]: E0226 08:26:00.878709 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators(fec9499c-44d9-45b6-8361-76b7c0e2ed54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators(fec9499c-44d9-45b6-8361-76b7c0e2ed54)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_openshift-operators_fec9499c-44d9-45b6-8361-76b7c0e2ed54_0(0507464a5a7ad4c812a8b492487ac29b9149e7d27266b28c0f5e5691f84d6cb2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" podUID="fec9499c-44d9-45b6-8361-76b7c0e2ed54" Feb 26 08:26:01 crc kubenswrapper[4741]: I0226 08:26:01.277363 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534906-w9g6b" Feb 26 08:26:01 crc kubenswrapper[4741]: I0226 08:26:01.278624 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534906-w9g6b" Feb 26 08:26:01 crc kubenswrapper[4741]: E0226 08:26:01.311736 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29534906-w9g6b_openshift-infra_6205896b-ec19-465c-b910-187543c44ddd_0(fb222f7756f4ffe19295e884a2a27c9cb533e3b78d5442b4765cbb69bf249458): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 08:26:01 crc kubenswrapper[4741]: E0226 08:26:01.311913 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29534906-w9g6b_openshift-infra_6205896b-ec19-465c-b910-187543c44ddd_0(fb222f7756f4ffe19295e884a2a27c9cb533e3b78d5442b4765cbb69bf249458): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29534906-w9g6b" Feb 26 08:26:01 crc kubenswrapper[4741]: E0226 08:26:01.312003 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29534906-w9g6b_openshift-infra_6205896b-ec19-465c-b910-187543c44ddd_0(fb222f7756f4ffe19295e884a2a27c9cb533e3b78d5442b4765cbb69bf249458): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-infra/auto-csr-approver-29534906-w9g6b" Feb 26 08:26:01 crc kubenswrapper[4741]: E0226 08:26:01.312135 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29534906-w9g6b_openshift-infra(6205896b-ec19-465c-b910-187543c44ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29534906-w9g6b_openshift-infra(6205896b-ec19-465c-b910-187543c44ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29534906-w9g6b_openshift-infra_6205896b-ec19-465c-b910-187543c44ddd_0(fb222f7756f4ffe19295e884a2a27c9cb533e3b78d5442b4765cbb69bf249458): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29534906-w9g6b" podUID="6205896b-ec19-465c-b910-187543c44ddd" Feb 26 08:26:02 crc kubenswrapper[4741]: I0226 08:26:02.790880 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:26:02 crc kubenswrapper[4741]: I0226 08:26:02.791370 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:26:02 crc kubenswrapper[4741]: E0226 08:26:02.849507 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7c8vj_openshift-operators_90d97168-5e93-4e51-b66e-d35fc864211d_0(d80bd96a12bb757c002811a8d28419cb1b9a07ec3fa900c064306cc0d30c7b2d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 08:26:02 crc kubenswrapper[4741]: E0226 08:26:02.849657 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7c8vj_openshift-operators_90d97168-5e93-4e51-b66e-d35fc864211d_0(d80bd96a12bb757c002811a8d28419cb1b9a07ec3fa900c064306cc0d30c7b2d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:26:02 crc kubenswrapper[4741]: E0226 08:26:02.849689 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7c8vj_openshift-operators_90d97168-5e93-4e51-b66e-d35fc864211d_0(d80bd96a12bb757c002811a8d28419cb1b9a07ec3fa900c064306cc0d30c7b2d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" Feb 26 08:26:02 crc kubenswrapper[4741]: E0226 08:26:02.849773 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-7c8vj_openshift-operators(90d97168-5e93-4e51-b66e-d35fc864211d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-7c8vj_openshift-operators(90d97168-5e93-4e51-b66e-d35fc864211d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7c8vj_openshift-operators_90d97168-5e93-4e51-b66e-d35fc864211d_0(d80bd96a12bb757c002811a8d28419cb1b9a07ec3fa900c064306cc0d30c7b2d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" podUID="90d97168-5e93-4e51-b66e-d35fc864211d"
Feb 26 08:26:03 crc kubenswrapper[4741]: I0226 08:26:03.786837 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb"
Feb 26 08:26:03 crc kubenswrapper[4741]: I0226 08:26:03.787570 4741 scope.go:117] "RemoveContainer" containerID="81add57eb7d12481eebd82f7729d18c5d2c5076fc39fa786742ae6a801f185d6"
Feb 26 08:26:03 crc kubenswrapper[4741]: I0226 08:26:03.788274 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb"
Feb 26 08:26:03 crc kubenswrapper[4741]: E0226 08:26:03.825720 4741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators_93b9d30b-e6dd-43d5-8599-eee30ab515a5_0(2df71fd9e161e029ddc86f8ec069eeedc7507f60d6aeae3b74151e7f4a5fd859): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 26 08:26:03 crc kubenswrapper[4741]: E0226 08:26:03.825811 4741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators_93b9d30b-e6dd-43d5-8599-eee30ab515a5_0(2df71fd9e161e029ddc86f8ec069eeedc7507f60d6aeae3b74151e7f4a5fd859): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb"
Feb 26 08:26:03 crc kubenswrapper[4741]: E0226 08:26:03.825836 4741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators_93b9d30b-e6dd-43d5-8599-eee30ab515a5_0(2df71fd9e161e029ddc86f8ec069eeedc7507f60d6aeae3b74151e7f4a5fd859): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb"
Feb 26 08:26:03 crc kubenswrapper[4741]: E0226 08:26:03.825894 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators(93b9d30b-e6dd-43d5-8599-eee30ab515a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators(93b9d30b-e6dd-43d5-8599-eee30ab515a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mwbdb_openshift-operators_93b9d30b-e6dd-43d5-8599-eee30ab515a5_0(2df71fd9e161e029ddc86f8ec069eeedc7507f60d6aeae3b74151e7f4a5fd859): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" podUID="93b9d30b-e6dd-43d5-8599-eee30ab515a5"
Feb 26 08:26:04 crc kubenswrapper[4741]: I0226 08:26:04.299624 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mzt8d_3fd732e7-0e36-485f-b750-856d6869e697/kube-multus/2.log"
Feb 26 08:26:04 crc kubenswrapper[4741]: I0226 08:26:04.299717 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mzt8d" event={"ID":"3fd732e7-0e36-485f-b750-856d6869e697","Type":"ContainerStarted","Data":"59d464c8e57d6f376994cec6b779c130a9effe30ab333018ab32a39824633f4f"}
Feb 26 08:26:10 crc kubenswrapper[4741]: I0226 08:26:10.666732 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq"
Feb 26 08:26:11 crc kubenswrapper[4741]: I0226 08:26:11.126876 4741 scope.go:117] "RemoveContainer" containerID="5388800f89a05062dd5e2b3f63cae0ee36123fd5ce0d1d6c32fcb6475525e5f0"
Feb 26 08:26:11 crc kubenswrapper[4741]: I0226 08:26:11.149960 4741 scope.go:117] "RemoveContainer" containerID="c4ce52528f80d3e4435b73d539d18f1e95452e1601a37b3e8eead5098d4a7ba1"
Feb 26 08:26:11 crc kubenswrapper[4741]: I0226 08:26:11.167268 4741 scope.go:117] "RemoveContainer" containerID="95e17145147afececb004baac2dc1107d75004ea11058d77bb135ba850459db8"
Feb 26 08:26:11 crc kubenswrapper[4741]: I0226 08:26:11.183846 4741 scope.go:117] "RemoveContainer" containerID="f33c26b30a8e5d33b64869c464d6105bce9a5320695bd962add61ab16948fd4a"
Feb 26 08:26:11 crc kubenswrapper[4741]: I0226 08:26:11.199314 4741 scope.go:117] "RemoveContainer" containerID="8f635de1cc89e092168521d7f657832de760f80ef64df3fc8284dd04c554375f"
Feb 26 08:26:11 crc kubenswrapper[4741]: I0226 08:26:11.225670 4741 scope.go:117] "RemoveContainer" containerID="3ca2b6a31da3bc8a7cce80daf6a680137b42b6140c6bd5152a84d658a4126507"
Feb 26 08:26:11 crc kubenswrapper[4741]: I0226 08:26:11.255159 4741 scope.go:117] "RemoveContainer" containerID="e2f2aa6146434cc60344c1e90335d17f1d7e8acbbcd90b4a3cef3db4985ece0a"
Feb 26 08:26:11 crc kubenswrapper[4741]: I0226 08:26:11.271763 4741 scope.go:117] "RemoveContainer" containerID="6be518022d2cb9ce5a32bfc2188d2dab7eabb28d54aa8b2650ad8cfcc14dd775"
Feb 26 08:26:11 crc kubenswrapper[4741]: I0226 08:26:11.298027 4741 scope.go:117] "RemoveContainer" containerID="fdd3643ae9d2f459b89b31a8dfa9a52474e06951aa453a9b669497cda165c7eb"
Feb 26 08:26:11 crc kubenswrapper[4741]: I0226 08:26:11.787249 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9"
Feb 26 08:26:11 crc kubenswrapper[4741]: I0226 08:26:11.788333 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9"
Feb 26 08:26:12 crc kubenswrapper[4741]: I0226 08:26:12.270935 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9"]
Feb 26 08:26:12 crc kubenswrapper[4741]: I0226 08:26:12.361137 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" event={"ID":"5c7ecfae-042e-4aad-8b9d-2e6a4284d75a","Type":"ContainerStarted","Data":"0b5915d6dc7bca8e28d3da528833053fd82f05fd2fe5e16b1e3601aeed2c2186"}
Feb 26 08:26:12 crc kubenswrapper[4741]: I0226 08:26:12.787224 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd"
Feb 26 08:26:12 crc kubenswrapper[4741]: I0226 08:26:12.787382 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt"
Feb 26 08:26:12 crc kubenswrapper[4741]: I0226 08:26:12.788654 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd"
Feb 26 08:26:12 crc kubenswrapper[4741]: I0226 08:26:12.788949 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt"
Feb 26 08:26:13 crc kubenswrapper[4741]: I0226 08:26:13.208054 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9vkjt"]
Feb 26 08:26:13 crc kubenswrapper[4741]: I0226 08:26:13.265922 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd"]
Feb 26 08:26:13 crc kubenswrapper[4741]: I0226 08:26:13.370293 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" event={"ID":"fec9499c-44d9-45b6-8361-76b7c0e2ed54","Type":"ContainerStarted","Data":"614f5fc06fe806bca3b02a1e7e1cb0c97b5940e684cc49c34341ca412fea2240"}
Feb 26 08:26:13 crc kubenswrapper[4741]: I0226 08:26:13.371589 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" event={"ID":"480c4db0-8b7a-4ef8-a2e6-c7289a9f21af","Type":"ContainerStarted","Data":"40785b4a55dcfefc0b794cb5bb626eee06ca53069fce86757c78c7bcde4fd16b"}
Feb 26 08:26:15 crc kubenswrapper[4741]: I0226 08:26:15.786715 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj"
Feb 26 08:26:15 crc kubenswrapper[4741]: I0226 08:26:15.799766 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj"
Feb 26 08:26:16 crc kubenswrapper[4741]: I0226 08:26:16.159276 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7c8vj"]
Feb 26 08:26:16 crc kubenswrapper[4741]: W0226 08:26:16.164553 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90d97168_5e93_4e51_b66e_d35fc864211d.slice/crio-0cff647a9e2ede3b5b81b35fcaaaf94f5e30563d3ff99f20a4a8c2cbf13adc4e WatchSource:0}: Error finding container 0cff647a9e2ede3b5b81b35fcaaaf94f5e30563d3ff99f20a4a8c2cbf13adc4e: Status 404 returned error can't find the container with id 0cff647a9e2ede3b5b81b35fcaaaf94f5e30563d3ff99f20a4a8c2cbf13adc4e
Feb 26 08:26:16 crc kubenswrapper[4741]: I0226 08:26:16.419437 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" event={"ID":"fec9499c-44d9-45b6-8361-76b7c0e2ed54","Type":"ContainerStarted","Data":"fcbf3806e1f7509c3a0cb7b91796ae5df9919f7240dc3a980815583fce7af656"}
Feb 26 08:26:16 crc kubenswrapper[4741]: I0226 08:26:16.449028 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" event={"ID":"5c7ecfae-042e-4aad-8b9d-2e6a4284d75a","Type":"ContainerStarted","Data":"cf182fa7a780e6e048af0bfc7e1ba531a6a4e760fb76cfad957e7c1a35cc773f"}
Feb 26 08:26:16 crc kubenswrapper[4741]: I0226 08:26:16.452396 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" event={"ID":"90d97168-5e93-4e51-b66e-d35fc864211d","Type":"ContainerStarted","Data":"0cff647a9e2ede3b5b81b35fcaaaf94f5e30563d3ff99f20a4a8c2cbf13adc4e"}
Feb 26 08:26:16 crc kubenswrapper[4741]: I0226 08:26:16.480761 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd" podStartSLOduration=28.842200053 podStartE2EDuration="31.480727819s" podCreationTimestamp="2026-02-26 08:25:45 +0000 UTC" firstStartedPulling="2026-02-26 08:26:13.274886395 +0000 UTC m=+808.270823782" lastFinishedPulling="2026-02-26 08:26:15.913414161 +0000 UTC m=+810.909351548" observedRunningTime="2026-02-26 08:26:16.46170376 +0000 UTC m=+811.457641177" watchObservedRunningTime="2026-02-26 08:26:16.480727819 +0000 UTC m=+811.476665216"
Feb 26 08:26:16 crc kubenswrapper[4741]: I0226 08:26:16.497840 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9" podStartSLOduration=27.879068608 podStartE2EDuration="31.497821323s" podCreationTimestamp="2026-02-26 08:25:45 +0000 UTC" firstStartedPulling="2026-02-26 08:26:12.290288082 +0000 UTC m=+807.286225479" lastFinishedPulling="2026-02-26 08:26:15.909040807 +0000 UTC m=+810.904978194" observedRunningTime="2026-02-26 08:26:16.494375956 +0000 UTC m=+811.490313383" watchObservedRunningTime="2026-02-26 08:26:16.497821323 +0000 UTC m=+811.493758710"
Feb 26 08:26:16 crc kubenswrapper[4741]: I0226 08:26:16.787088 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534906-w9g6b"
Feb 26 08:26:16 crc kubenswrapper[4741]: I0226 08:26:16.787868 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534906-w9g6b"
Feb 26 08:26:17 crc kubenswrapper[4741]: I0226 08:26:17.009695 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534906-w9g6b"]
Feb 26 08:26:17 crc kubenswrapper[4741]: W0226 08:26:17.013042 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6205896b_ec19_465c_b910_187543c44ddd.slice/crio-34de8fb0ecabc8e51a44224faecc43196c1665e40fe1861756dac821a9dfb0b8 WatchSource:0}: Error finding container 34de8fb0ecabc8e51a44224faecc43196c1665e40fe1861756dac821a9dfb0b8: Status 404 returned error can't find the container with id 34de8fb0ecabc8e51a44224faecc43196c1665e40fe1861756dac821a9dfb0b8
Feb 26 08:26:17 crc kubenswrapper[4741]: I0226 08:26:17.460772 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534906-w9g6b" event={"ID":"6205896b-ec19-465c-b910-187543c44ddd","Type":"ContainerStarted","Data":"34de8fb0ecabc8e51a44224faecc43196c1665e40fe1861756dac821a9dfb0b8"}
Feb 26 08:26:18 crc kubenswrapper[4741]: I0226 08:26:18.786745 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb"
Feb 26 08:26:18 crc kubenswrapper[4741]: I0226 08:26:18.788135 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb"
Feb 26 08:26:21 crc kubenswrapper[4741]: I0226 08:26:21.264742 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb"]
Feb 26 08:26:21 crc kubenswrapper[4741]: W0226 08:26:21.275260 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93b9d30b_e6dd_43d5_8599_eee30ab515a5.slice/crio-5c7ff94c8946bd21ae8f6dc2de2dd1de0025b58f5b96c229d79a6f3fb7ab81ca WatchSource:0}: Error finding container 5c7ff94c8946bd21ae8f6dc2de2dd1de0025b58f5b96c229d79a6f3fb7ab81ca: Status 404 returned error can't find the container with id 5c7ff94c8946bd21ae8f6dc2de2dd1de0025b58f5b96c229d79a6f3fb7ab81ca
Feb 26 08:26:21 crc kubenswrapper[4741]: I0226 08:26:21.516016 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" event={"ID":"90d97168-5e93-4e51-b66e-d35fc864211d","Type":"ContainerStarted","Data":"eff39fd283c672624445ca49555968fd569a8f4bb14ed66b75a0757e84ef2668"}
Feb 26 08:26:21 crc kubenswrapper[4741]: I0226 08:26:21.517651 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj"
Feb 26 08:26:21 crc kubenswrapper[4741]: I0226 08:26:21.525404 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" event={"ID":"480c4db0-8b7a-4ef8-a2e6-c7289a9f21af","Type":"ContainerStarted","Data":"04a66045a487bc8c16872f09445c3dbc03872fd69b7dc9ee2b01bb651602fa0a"}
Feb 26 08:26:21 crc kubenswrapper[4741]: I0226 08:26:21.527386 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt"
Feb 26 08:26:21 crc kubenswrapper[4741]: I0226 08:26:21.528902 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" event={"ID":"93b9d30b-e6dd-43d5-8599-eee30ab515a5","Type":"ContainerStarted","Data":"5c7ff94c8946bd21ae8f6dc2de2dd1de0025b58f5b96c229d79a6f3fb7ab81ca"}
Feb 26 08:26:21 crc kubenswrapper[4741]: I0226 08:26:21.530965 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534906-w9g6b" event={"ID":"6205896b-ec19-465c-b910-187543c44ddd","Type":"ContainerStarted","Data":"ad49ad0cc0c2e49ab21c6e6e93285f3c64857d43e31f0c825cc5649135553f43"}
Feb 26 08:26:21 crc kubenswrapper[4741]: I0226 08:26:21.548561 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" podStartSLOduration=30.919319848 podStartE2EDuration="35.548537071s" podCreationTimestamp="2026-02-26 08:25:46 +0000 UTC" firstStartedPulling="2026-02-26 08:26:16.166595516 +0000 UTC m=+811.162532903" lastFinishedPulling="2026-02-26 08:26:20.795812719 +0000 UTC m=+815.791750126" observedRunningTime="2026-02-26 08:26:21.546980277 +0000 UTC m=+816.542917664" watchObservedRunningTime="2026-02-26 08:26:21.548537071 +0000 UTC m=+816.544474458"
Feb 26 08:26:21 crc kubenswrapper[4741]: I0226 08:26:21.568615 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" podStartSLOduration=28.982961002 podStartE2EDuration="36.568592669s" podCreationTimestamp="2026-02-26 08:25:45 +0000 UTC" firstStartedPulling="2026-02-26 08:26:13.217915901 +0000 UTC m=+808.213853288" lastFinishedPulling="2026-02-26 08:26:20.803547528 +0000 UTC m=+815.799484955" observedRunningTime="2026-02-26 08:26:21.563393302 +0000 UTC m=+816.559330689" watchObservedRunningTime="2026-02-26 08:26:21.568592669 +0000 UTC m=+816.564530056"
Feb 26 08:26:21 crc kubenswrapper[4741]: I0226 08:26:21.581668 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534906-w9g6b" podStartSLOduration=17.801155209 podStartE2EDuration="21.581648659s" podCreationTimestamp="2026-02-26 08:26:00 +0000 UTC" firstStartedPulling="2026-02-26 08:26:17.016269676 +0000 UTC m=+812.012207063" lastFinishedPulling="2026-02-26 08:26:20.796763096 +0000 UTC m=+815.792700513" observedRunningTime="2026-02-26 08:26:21.576825533 +0000 UTC m=+816.572762920" watchObservedRunningTime="2026-02-26 08:26:21.581648659 +0000 UTC m=+816.577586046"
Feb 26 08:26:21 crc kubenswrapper[4741]: I0226 08:26:21.602573 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt"
Feb 26 08:26:22 crc kubenswrapper[4741]: I0226 08:26:22.541870 4741 generic.go:334] "Generic (PLEG): container finished" podID="6205896b-ec19-465c-b910-187543c44ddd" containerID="ad49ad0cc0c2e49ab21c6e6e93285f3c64857d43e31f0c825cc5649135553f43" exitCode=0
Feb 26 08:26:22 crc kubenswrapper[4741]: I0226 08:26:22.542995 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534906-w9g6b" event={"ID":"6205896b-ec19-465c-b910-187543c44ddd","Type":"ContainerDied","Data":"ad49ad0cc0c2e49ab21c6e6e93285f3c64857d43e31f0c825cc5649135553f43"}
Feb 26 08:26:23 crc kubenswrapper[4741]: I0226 08:26:23.808355 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534906-w9g6b"
Feb 26 08:26:23 crc kubenswrapper[4741]: I0226 08:26:23.951044 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cf22\" (UniqueName: \"kubernetes.io/projected/6205896b-ec19-465c-b910-187543c44ddd-kube-api-access-9cf22\") pod \"6205896b-ec19-465c-b910-187543c44ddd\" (UID: \"6205896b-ec19-465c-b910-187543c44ddd\") "
Feb 26 08:26:23 crc kubenswrapper[4741]: I0226 08:26:23.957495 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6205896b-ec19-465c-b910-187543c44ddd-kube-api-access-9cf22" (OuterVolumeSpecName: "kube-api-access-9cf22") pod "6205896b-ec19-465c-b910-187543c44ddd" (UID: "6205896b-ec19-465c-b910-187543c44ddd"). InnerVolumeSpecName "kube-api-access-9cf22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:26:24 crc kubenswrapper[4741]: I0226 08:26:24.053001 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cf22\" (UniqueName: \"kubernetes.io/projected/6205896b-ec19-465c-b910-187543c44ddd-kube-api-access-9cf22\") on node \"crc\" DevicePath \"\""
Feb 26 08:26:24 crc kubenswrapper[4741]: I0226 08:26:24.560033 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534906-w9g6b"
Feb 26 08:26:24 crc kubenswrapper[4741]: I0226 08:26:24.560048 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534906-w9g6b" event={"ID":"6205896b-ec19-465c-b910-187543c44ddd","Type":"ContainerDied","Data":"34de8fb0ecabc8e51a44224faecc43196c1665e40fe1861756dac821a9dfb0b8"}
Feb 26 08:26:24 crc kubenswrapper[4741]: I0226 08:26:24.560395 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34de8fb0ecabc8e51a44224faecc43196c1665e40fe1861756dac821a9dfb0b8"
Feb 26 08:26:24 crc kubenswrapper[4741]: I0226 08:26:24.562139 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" event={"ID":"93b9d30b-e6dd-43d5-8599-eee30ab515a5","Type":"ContainerStarted","Data":"e0ef694d9d005c451079b31be5129231f11132af881eecd4872cef7fd2f04ebc"}
Feb 26 08:26:24 crc kubenswrapper[4741]: I0226 08:26:24.605424 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mwbdb" podStartSLOduration=37.278850349 podStartE2EDuration="39.605388062s" podCreationTimestamp="2026-02-26 08:25:45 +0000 UTC" firstStartedPulling="2026-02-26 08:26:21.281238956 +0000 UTC m=+816.277176343" lastFinishedPulling="2026-02-26 08:26:23.607776669 +0000 UTC m=+818.603714056" observedRunningTime="2026-02-26 08:26:24.581255448 +0000 UTC m=+819.577192845" watchObservedRunningTime="2026-02-26 08:26:24.605388062 +0000 UTC m=+819.601325449"
Feb 26 08:26:24 crc kubenswrapper[4741]: I0226 08:26:24.870346 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534900-dbfxr"]
Feb 26 08:26:24 crc kubenswrapper[4741]: I0226 08:26:24.877027 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534900-dbfxr"]
Feb 26 08:26:25 crc kubenswrapper[4741]: I0226 08:26:25.148813 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 08:26:25 crc kubenswrapper[4741]: I0226 08:26:25.149354 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 08:26:25 crc kubenswrapper[4741]: I0226 08:26:25.797212 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ba7b413-c1cc-42a1-82d0-7b60ef85568c" path="/var/lib/kubelet/pods/9ba7b413-c1cc-42a1-82d0-7b60ef85568c/volumes"
Feb 26 08:26:26 crc kubenswrapper[4741]: I0226 08:26:26.421950 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.385827 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-4zq2p"]
Feb 26 08:26:32 crc kubenswrapper[4741]: E0226 08:26:32.387310 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6205896b-ec19-465c-b910-187543c44ddd" containerName="oc"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.387328 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="6205896b-ec19-465c-b910-187543c44ddd" containerName="oc"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.387507 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="6205896b-ec19-465c-b910-187543c44ddd" containerName="oc"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.388253 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-4zq2p"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.393450 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.393531 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.394207 4741 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9g7lc"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.405328 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-4zq2p"]
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.434509 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-crzq9"]
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.435634 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-crzq9"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.442915 4741 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hvkvl"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.448813 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-k969n"]
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.451737 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-k969n"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.454017 4741 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-cww65"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.516784 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-crzq9"]
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.532844 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45kf\" (UniqueName: \"kubernetes.io/projected/871ddbfc-c6f2-4eb2-ad70-053df3cdb01b-kube-api-access-h45kf\") pod \"cert-manager-cainjector-cf98fcc89-4zq2p\" (UID: \"871ddbfc-c6f2-4eb2-ad70-053df3cdb01b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-4zq2p"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.549692 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-k969n"]
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.633982 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h45kf\" (UniqueName: \"kubernetes.io/projected/871ddbfc-c6f2-4eb2-ad70-053df3cdb01b-kube-api-access-h45kf\") pod \"cert-manager-cainjector-cf98fcc89-4zq2p\" (UID: \"871ddbfc-c6f2-4eb2-ad70-053df3cdb01b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-4zq2p"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.634038 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-897tg\" (UniqueName: \"kubernetes.io/projected/87889307-9479-43d6-b134-f92d0b413d14-kube-api-access-897tg\") pod \"cert-manager-858654f9db-crzq9\" (UID: \"87889307-9479-43d6-b134-f92d0b413d14\") " pod="cert-manager/cert-manager-858654f9db-crzq9"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.634121 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9g7c\" (UniqueName: \"kubernetes.io/projected/b979c6a5-dfb5-43c3-8787-0d4e96bebd64-kube-api-access-q9g7c\") pod \"cert-manager-webhook-687f57d79b-k969n\" (UID: \"b979c6a5-dfb5-43c3-8787-0d4e96bebd64\") " pod="cert-manager/cert-manager-webhook-687f57d79b-k969n"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.657359 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h45kf\" (UniqueName: \"kubernetes.io/projected/871ddbfc-c6f2-4eb2-ad70-053df3cdb01b-kube-api-access-h45kf\") pod \"cert-manager-cainjector-cf98fcc89-4zq2p\" (UID: \"871ddbfc-c6f2-4eb2-ad70-053df3cdb01b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-4zq2p"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.706749 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-4zq2p"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.735375 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9g7c\" (UniqueName: \"kubernetes.io/projected/b979c6a5-dfb5-43c3-8787-0d4e96bebd64-kube-api-access-q9g7c\") pod \"cert-manager-webhook-687f57d79b-k969n\" (UID: \"b979c6a5-dfb5-43c3-8787-0d4e96bebd64\") " pod="cert-manager/cert-manager-webhook-687f57d79b-k969n"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.735805 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-897tg\" (UniqueName: \"kubernetes.io/projected/87889307-9479-43d6-b134-f92d0b413d14-kube-api-access-897tg\") pod \"cert-manager-858654f9db-crzq9\" (UID: \"87889307-9479-43d6-b134-f92d0b413d14\") " pod="cert-manager/cert-manager-858654f9db-crzq9"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.756780 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-897tg\" (UniqueName: \"kubernetes.io/projected/87889307-9479-43d6-b134-f92d0b413d14-kube-api-access-897tg\") pod \"cert-manager-858654f9db-crzq9\" (UID: \"87889307-9479-43d6-b134-f92d0b413d14\") " pod="cert-manager/cert-manager-858654f9db-crzq9"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.756865 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9g7c\" (UniqueName: \"kubernetes.io/projected/b979c6a5-dfb5-43c3-8787-0d4e96bebd64-kube-api-access-q9g7c\") pod \"cert-manager-webhook-687f57d79b-k969n\" (UID: \"b979c6a5-dfb5-43c3-8787-0d4e96bebd64\") " pod="cert-manager/cert-manager-webhook-687f57d79b-k969n"
Feb 26 08:26:32 crc kubenswrapper[4741]: I0226 08:26:32.772753 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-k969n"
Feb 26 08:26:33 crc kubenswrapper[4741]: I0226 08:26:33.046287 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-k969n"]
Feb 26 08:26:33 crc kubenswrapper[4741]: I0226 08:26:33.051453 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-crzq9"
Feb 26 08:26:33 crc kubenswrapper[4741]: I0226 08:26:33.151430 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-4zq2p"]
Feb 26 08:26:33 crc kubenswrapper[4741]: W0226 08:26:33.154526 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod871ddbfc_c6f2_4eb2_ad70_053df3cdb01b.slice/crio-de8609c56f68df33f76e85b51556486408c3f99b4bbc5bdb1997e9c374e04b07 WatchSource:0}: Error finding container de8609c56f68df33f76e85b51556486408c3f99b4bbc5bdb1997e9c374e04b07: Status 404 returned error can't find the container with id de8609c56f68df33f76e85b51556486408c3f99b4bbc5bdb1997e9c374e04b07
Feb 26 08:26:33 crc kubenswrapper[4741]: W0226 08:26:33.308263 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87889307_9479_43d6_b134_f92d0b413d14.slice/crio-1bcc94d82299adcdfa367199c1e8c2d72283d24da54d8c1d19f9b23995cd0798 WatchSource:0}: Error finding container 1bcc94d82299adcdfa367199c1e8c2d72283d24da54d8c1d19f9b23995cd0798: Status 404 returned error can't find the container with id 1bcc94d82299adcdfa367199c1e8c2d72283d24da54d8c1d19f9b23995cd0798
Feb 26 08:26:33 crc kubenswrapper[4741]: I0226 08:26:33.309824 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-crzq9"]
Feb 26 08:26:33 crc kubenswrapper[4741]: I0226 08:26:33.628055 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-k969n" event={"ID":"b979c6a5-dfb5-43c3-8787-0d4e96bebd64","Type":"ContainerStarted","Data":"9d922cc5d95c626a163ca3c06430abfa3fb146139da6fa8f1f1675f767e6feff"}
Feb 26 08:26:33 crc kubenswrapper[4741]: I0226 08:26:33.630007 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-crzq9" event={"ID":"87889307-9479-43d6-b134-f92d0b413d14","Type":"ContainerStarted","Data":"1bcc94d82299adcdfa367199c1e8c2d72283d24da54d8c1d19f9b23995cd0798"}
Feb 26 08:26:33 crc kubenswrapper[4741]: I0226 08:26:33.631430 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-4zq2p" event={"ID":"871ddbfc-c6f2-4eb2-ad70-053df3cdb01b","Type":"ContainerStarted","Data":"de8609c56f68df33f76e85b51556486408c3f99b4bbc5bdb1997e9c374e04b07"}
Feb 26 08:26:36 crc kubenswrapper[4741]: I0226 08:26:36.050183 4741 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 26 08:26:42 crc kubenswrapper[4741]: I0226 08:26:42.714274 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-4zq2p" event={"ID":"871ddbfc-c6f2-4eb2-ad70-053df3cdb01b","Type":"ContainerStarted","Data":"a7e4553f423d9e988ed5fb75c856c73a98af4c4fed2263fc42d848bc5ed488e9"}
Feb 26 08:26:42 crc kubenswrapper[4741]: I0226 08:26:42.716539 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-k969n" event={"ID":"b979c6a5-dfb5-43c3-8787-0d4e96bebd64","Type":"ContainerStarted","Data":"e3f86755af5c27753db98b94f82d9a40b10e76f06ddd1ef24b78e267299a8b30"}
Feb 26 08:26:42 crc kubenswrapper[4741]: I0226 08:26:42.716718 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-k969n"
Feb 26 08:26:42 crc kubenswrapper[4741]: I0226 08:26:42.718683 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-crzq9" event={"ID":"87889307-9479-43d6-b134-f92d0b413d14","Type":"ContainerStarted","Data":"c3f5752fccf7ab219c3ff8e3086780b6ee88fe8afc03a2d25788a1ea795c1a4a"}
Feb 26 08:26:42 crc kubenswrapper[4741]: I0226 08:26:42.748085 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-4zq2p" podStartSLOduration=1.667559254 podStartE2EDuration="10.748046095s" podCreationTimestamp="2026-02-26 08:26:32 +0000 UTC" firstStartedPulling="2026-02-26 08:26:33.156605713 +0000 UTC m=+828.152543120" lastFinishedPulling="2026-02-26 08:26:42.237092554 +0000 UTC m=+837.233029961" observedRunningTime="2026-02-26 08:26:42.742278171 +0000 UTC m=+837.738215638" watchObservedRunningTime="2026-02-26 08:26:42.748046095 +0000 UTC m=+837.743983512"
Feb 26 08:26:42 crc kubenswrapper[4741]: I0226 08:26:42.775745 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-k969n" podStartSLOduration=1.4662391590000001 podStartE2EDuration="10.775707109s" podCreationTimestamp="2026-02-26 08:26:32 +0000 UTC" firstStartedPulling="2026-02-26 08:26:33.060697245 +0000 UTC m=+828.056634632" lastFinishedPulling="2026-02-26 08:26:42.370165185 +0000 UTC m=+837.366102582" observedRunningTime="2026-02-26 08:26:42.766726174 +0000 UTC m=+837.762663591" watchObservedRunningTime="2026-02-26 08:26:42.775707109 +0000 UTC m=+837.771644536"
Feb 26 08:26:42 crc kubenswrapper[4741]: I0226 08:26:42.801842 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-crzq9" podStartSLOduration=1.8779638360000002 podStartE2EDuration="10.801809138s" podCreationTimestamp="2026-02-26 08:26:32 +0000 UTC" firstStartedPulling="2026-02-26 08:26:33.31246135 +0000 UTC m=+828.308398737" lastFinishedPulling="2026-02-26 08:26:42.236306612 +0000 UTC m=+837.232244039" observedRunningTime="2026-02-26 08:26:42.79797733 +0000 UTC m=+837.793914717" watchObservedRunningTime="2026-02-26 08:26:42.801809138 +0000 UTC m=+837.797746525"
Feb 26 08:26:47 crc kubenswrapper[4741]: I0226 08:26:47.779458 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-k969n"
Feb 26 08:26:55 crc kubenswrapper[4741]: I0226 08:26:55.149665 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 08:26:55 crc kubenswrapper[4741]: I0226 08:26:55.150653 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 08:26:55 crc kubenswrapper[4741]: I0226 08:26:55.150752 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s"
Feb 26 08:26:55 crc kubenswrapper[4741]: I0226 08:26:55.151978 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5317dc5f3c75a59e412204491b2519a9de9fcc656951e4938ef7f60d11fdcaab"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 08:26:55 crc kubenswrapper[4741]: I0226 08:26:55.152083 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://5317dc5f3c75a59e412204491b2519a9de9fcc656951e4938ef7f60d11fdcaab" gracePeriod=600
Feb 26 08:26:56 crc kubenswrapper[4741]: I0226 08:26:56.846696 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="5317dc5f3c75a59e412204491b2519a9de9fcc656951e4938ef7f60d11fdcaab" exitCode=0
Feb 26 08:26:56 crc kubenswrapper[4741]: I0226 08:26:56.846774 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"5317dc5f3c75a59e412204491b2519a9de9fcc656951e4938ef7f60d11fdcaab"}
Feb 26 08:26:56 crc kubenswrapper[4741]: I0226 08:26:56.847666 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"68abf4356f81aabd267885bc4a138705d4a3fc790f51e9e7362b1f352ff25cfd"}
Feb 26 08:26:56 crc kubenswrapper[4741]: I0226 08:26:56.847692 4741 scope.go:117] "RemoveContainer" containerID="d275452109ad7f1a766ddc0a89f39adebdb324aac4761d7d12d72614a2f26d16"
Feb 26 08:27:11 crc kubenswrapper[4741]: I0226 08:27:11.344456 4741 scope.go:117] "RemoveContainer" containerID="1b7e54b68fbd0f7bdc17500e3e42ece215d1ef3b9c8e6b5fa24cdb261d2bd0eb"
Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.052513 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7"]
Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.054492 4741 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.057705 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.117791 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7"] Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.155997 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e17317b7-5def-4061-b7f8-a763e30b9868-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7\" (UID: \"e17317b7-5def-4061-b7f8-a763e30b9868\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.156059 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzsq7\" (UniqueName: \"kubernetes.io/projected/e17317b7-5def-4061-b7f8-a763e30b9868-kube-api-access-xzsq7\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7\" (UID: \"e17317b7-5def-4061-b7f8-a763e30b9868\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.156124 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e17317b7-5def-4061-b7f8-a763e30b9868-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7\" (UID: \"e17317b7-5def-4061-b7f8-a763e30b9868\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" Feb 26 08:27:14 crc kubenswrapper[4741]: 
I0226 08:27:14.234026 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z"] Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.259065 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e17317b7-5def-4061-b7f8-a763e30b9868-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7\" (UID: \"e17317b7-5def-4061-b7f8-a763e30b9868\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.259170 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzsq7\" (UniqueName: \"kubernetes.io/projected/e17317b7-5def-4061-b7f8-a763e30b9868-kube-api-access-xzsq7\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7\" (UID: \"e17317b7-5def-4061-b7f8-a763e30b9868\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.259208 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e17317b7-5def-4061-b7f8-a763e30b9868-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7\" (UID: \"e17317b7-5def-4061-b7f8-a763e30b9868\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.259768 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e17317b7-5def-4061-b7f8-a763e30b9868-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7\" (UID: \"e17317b7-5def-4061-b7f8-a763e30b9868\") " 
pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.260017 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e17317b7-5def-4061-b7f8-a763e30b9868-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7\" (UID: \"e17317b7-5def-4061-b7f8-a763e30b9868\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.269619 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.275237 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z"] Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.284676 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzsq7\" (UniqueName: \"kubernetes.io/projected/e17317b7-5def-4061-b7f8-a763e30b9868-kube-api-access-xzsq7\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7\" (UID: \"e17317b7-5def-4061-b7f8-a763e30b9868\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.379647 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.462028 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8487ba68-4c63-4d44-a687-1ca047c859d2-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z\" (UID: \"8487ba68-4c63-4d44-a687-1ca047c859d2\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.462132 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8487ba68-4c63-4d44-a687-1ca047c859d2-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z\" (UID: \"8487ba68-4c63-4d44-a687-1ca047c859d2\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.462350 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7b6f\" (UniqueName: \"kubernetes.io/projected/8487ba68-4c63-4d44-a687-1ca047c859d2-kube-api-access-x7b6f\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z\" (UID: \"8487ba68-4c63-4d44-a687-1ca047c859d2\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.564365 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7b6f\" (UniqueName: \"kubernetes.io/projected/8487ba68-4c63-4d44-a687-1ca047c859d2-kube-api-access-x7b6f\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z\" (UID: \"8487ba68-4c63-4d44-a687-1ca047c859d2\") " 
pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.564935 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8487ba68-4c63-4d44-a687-1ca047c859d2-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z\" (UID: \"8487ba68-4c63-4d44-a687-1ca047c859d2\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.565000 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8487ba68-4c63-4d44-a687-1ca047c859d2-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z\" (UID: \"8487ba68-4c63-4d44-a687-1ca047c859d2\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.565632 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8487ba68-4c63-4d44-a687-1ca047c859d2-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z\" (UID: \"8487ba68-4c63-4d44-a687-1ca047c859d2\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.565861 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8487ba68-4c63-4d44-a687-1ca047c859d2-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z\" (UID: \"8487ba68-4c63-4d44-a687-1ca047c859d2\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.592312 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x7b6f\" (UniqueName: \"kubernetes.io/projected/8487ba68-4c63-4d44-a687-1ca047c859d2-kube-api-access-x7b6f\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z\" (UID: \"8487ba68-4c63-4d44-a687-1ca047c859d2\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.624915 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.927954 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7"] Feb 26 08:27:14 crc kubenswrapper[4741]: I0226 08:27:14.970012 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z"] Feb 26 08:27:15 crc kubenswrapper[4741]: I0226 08:27:15.056190 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" event={"ID":"8487ba68-4c63-4d44-a687-1ca047c859d2","Type":"ContainerStarted","Data":"b2f4a99826906a0c1ca0d87e12b734905802ea2275b045e3f89074347aa792c3"} Feb 26 08:27:15 crc kubenswrapper[4741]: I0226 08:27:15.057363 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" event={"ID":"e17317b7-5def-4061-b7f8-a763e30b9868","Type":"ContainerStarted","Data":"611e4d6a973f0c746a32e4af0bc853715769ee305a61ba09382402c287c51a41"} Feb 26 08:27:16 crc kubenswrapper[4741]: I0226 08:27:16.066248 4741 generic.go:334] "Generic (PLEG): container finished" podID="8487ba68-4c63-4d44-a687-1ca047c859d2" 
containerID="214812748c11a2a7ee6897c1d20dd0542b58970e62fbeb3f16017d65590f1c78" exitCode=0 Feb 26 08:27:16 crc kubenswrapper[4741]: I0226 08:27:16.066328 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" event={"ID":"8487ba68-4c63-4d44-a687-1ca047c859d2","Type":"ContainerDied","Data":"214812748c11a2a7ee6897c1d20dd0542b58970e62fbeb3f16017d65590f1c78"} Feb 26 08:27:16 crc kubenswrapper[4741]: I0226 08:27:16.069497 4741 generic.go:334] "Generic (PLEG): container finished" podID="e17317b7-5def-4061-b7f8-a763e30b9868" containerID="6430dcad5c45a80f686417d17a91f5a5a19b5df22001f15565e6c86863d11f4b" exitCode=0 Feb 26 08:27:16 crc kubenswrapper[4741]: I0226 08:27:16.069536 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" event={"ID":"e17317b7-5def-4061-b7f8-a763e30b9868","Type":"ContainerDied","Data":"6430dcad5c45a80f686417d17a91f5a5a19b5df22001f15565e6c86863d11f4b"} Feb 26 08:27:16 crc kubenswrapper[4741]: I0226 08:27:16.070103 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 08:27:17 crc kubenswrapper[4741]: I0226 08:27:17.802060 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dxq6k"] Feb 26 08:27:17 crc kubenswrapper[4741]: I0226 08:27:17.804479 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dxq6k" Feb 26 08:27:17 crc kubenswrapper[4741]: I0226 08:27:17.814218 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxq6k"] Feb 26 08:27:17 crc kubenswrapper[4741]: I0226 08:27:17.854390 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd2nq\" (UniqueName: \"kubernetes.io/projected/e0f92902-b627-4e1f-9882-cf9f396220b8-kube-api-access-rd2nq\") pod \"redhat-operators-dxq6k\" (UID: \"e0f92902-b627-4e1f-9882-cf9f396220b8\") " pod="openshift-marketplace/redhat-operators-dxq6k" Feb 26 08:27:17 crc kubenswrapper[4741]: I0226 08:27:17.854596 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0f92902-b627-4e1f-9882-cf9f396220b8-utilities\") pod \"redhat-operators-dxq6k\" (UID: \"e0f92902-b627-4e1f-9882-cf9f396220b8\") " pod="openshift-marketplace/redhat-operators-dxq6k" Feb 26 08:27:17 crc kubenswrapper[4741]: I0226 08:27:17.854697 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0f92902-b627-4e1f-9882-cf9f396220b8-catalog-content\") pod \"redhat-operators-dxq6k\" (UID: \"e0f92902-b627-4e1f-9882-cf9f396220b8\") " pod="openshift-marketplace/redhat-operators-dxq6k" Feb 26 08:27:17 crc kubenswrapper[4741]: I0226 08:27:17.957062 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd2nq\" (UniqueName: \"kubernetes.io/projected/e0f92902-b627-4e1f-9882-cf9f396220b8-kube-api-access-rd2nq\") pod \"redhat-operators-dxq6k\" (UID: \"e0f92902-b627-4e1f-9882-cf9f396220b8\") " pod="openshift-marketplace/redhat-operators-dxq6k" Feb 26 08:27:17 crc kubenswrapper[4741]: I0226 08:27:17.957436 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0f92902-b627-4e1f-9882-cf9f396220b8-utilities\") pod \"redhat-operators-dxq6k\" (UID: \"e0f92902-b627-4e1f-9882-cf9f396220b8\") " pod="openshift-marketplace/redhat-operators-dxq6k" Feb 26 08:27:17 crc kubenswrapper[4741]: I0226 08:27:17.957565 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0f92902-b627-4e1f-9882-cf9f396220b8-catalog-content\") pod \"redhat-operators-dxq6k\" (UID: \"e0f92902-b627-4e1f-9882-cf9f396220b8\") " pod="openshift-marketplace/redhat-operators-dxq6k" Feb 26 08:27:17 crc kubenswrapper[4741]: I0226 08:27:17.958147 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0f92902-b627-4e1f-9882-cf9f396220b8-utilities\") pod \"redhat-operators-dxq6k\" (UID: \"e0f92902-b627-4e1f-9882-cf9f396220b8\") " pod="openshift-marketplace/redhat-operators-dxq6k" Feb 26 08:27:17 crc kubenswrapper[4741]: I0226 08:27:17.958159 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0f92902-b627-4e1f-9882-cf9f396220b8-catalog-content\") pod \"redhat-operators-dxq6k\" (UID: \"e0f92902-b627-4e1f-9882-cf9f396220b8\") " pod="openshift-marketplace/redhat-operators-dxq6k" Feb 26 08:27:17 crc kubenswrapper[4741]: I0226 08:27:17.979216 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd2nq\" (UniqueName: \"kubernetes.io/projected/e0f92902-b627-4e1f-9882-cf9f396220b8-kube-api-access-rd2nq\") pod \"redhat-operators-dxq6k\" (UID: \"e0f92902-b627-4e1f-9882-cf9f396220b8\") " pod="openshift-marketplace/redhat-operators-dxq6k" Feb 26 08:27:18 crc kubenswrapper[4741]: I0226 08:27:18.168823 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dxq6k" Feb 26 08:27:18 crc kubenswrapper[4741]: I0226 08:27:18.848233 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxq6k"] Feb 26 08:27:19 crc kubenswrapper[4741]: I0226 08:27:19.098327 4741 generic.go:334] "Generic (PLEG): container finished" podID="8487ba68-4c63-4d44-a687-1ca047c859d2" containerID="7a0c5272aa5690bfc17b0bee2597312368de7ad30f407a9522193bcf538122d8" exitCode=0 Feb 26 08:27:19 crc kubenswrapper[4741]: I0226 08:27:19.098398 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" event={"ID":"8487ba68-4c63-4d44-a687-1ca047c859d2","Type":"ContainerDied","Data":"7a0c5272aa5690bfc17b0bee2597312368de7ad30f407a9522193bcf538122d8"} Feb 26 08:27:19 crc kubenswrapper[4741]: I0226 08:27:19.100907 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxq6k" event={"ID":"e0f92902-b627-4e1f-9882-cf9f396220b8","Type":"ContainerStarted","Data":"3fb6c67b323e28d755cc21d5a13027a4dd169441f17d84144f1fc06c9d01a630"} Feb 26 08:27:19 crc kubenswrapper[4741]: I0226 08:27:19.105380 4741 generic.go:334] "Generic (PLEG): container finished" podID="e17317b7-5def-4061-b7f8-a763e30b9868" containerID="8bd9f1add3743fd243126b99447f0a1710814c3590c0c107928fc81e96843397" exitCode=0 Feb 26 08:27:19 crc kubenswrapper[4741]: I0226 08:27:19.105522 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" event={"ID":"e17317b7-5def-4061-b7f8-a763e30b9868","Type":"ContainerDied","Data":"8bd9f1add3743fd243126b99447f0a1710814c3590c0c107928fc81e96843397"} Feb 26 08:27:20 crc kubenswrapper[4741]: I0226 08:27:20.117378 4741 generic.go:334] "Generic (PLEG): container finished" podID="8487ba68-4c63-4d44-a687-1ca047c859d2" 
containerID="1b28c6f41f05abf4e75d67667cd9b9bad6299afde49880d75db7afa74ceda497" exitCode=0 Feb 26 08:27:20 crc kubenswrapper[4741]: I0226 08:27:20.117525 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" event={"ID":"8487ba68-4c63-4d44-a687-1ca047c859d2","Type":"ContainerDied","Data":"1b28c6f41f05abf4e75d67667cd9b9bad6299afde49880d75db7afa74ceda497"} Feb 26 08:27:20 crc kubenswrapper[4741]: I0226 08:27:20.120672 4741 generic.go:334] "Generic (PLEG): container finished" podID="e0f92902-b627-4e1f-9882-cf9f396220b8" containerID="e6c9f3706b247cb246711fff88765694503b5fac85ab2a0627fe27dde09de56b" exitCode=0 Feb 26 08:27:20 crc kubenswrapper[4741]: I0226 08:27:20.120769 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxq6k" event={"ID":"e0f92902-b627-4e1f-9882-cf9f396220b8","Type":"ContainerDied","Data":"e6c9f3706b247cb246711fff88765694503b5fac85ab2a0627fe27dde09de56b"} Feb 26 08:27:20 crc kubenswrapper[4741]: I0226 08:27:20.124422 4741 generic.go:334] "Generic (PLEG): container finished" podID="e17317b7-5def-4061-b7f8-a763e30b9868" containerID="b49ad497f656df9d688c3897006f09cff30ff5cbceb03ccaf66633b978851c48" exitCode=0 Feb 26 08:27:20 crc kubenswrapper[4741]: I0226 08:27:20.124587 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" event={"ID":"e17317b7-5def-4061-b7f8-a763e30b9868","Type":"ContainerDied","Data":"b49ad497f656df9d688c3897006f09cff30ff5cbceb03ccaf66633b978851c48"} Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.428708 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.515098 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e17317b7-5def-4061-b7f8-a763e30b9868-util\") pod \"e17317b7-5def-4061-b7f8-a763e30b9868\" (UID: \"e17317b7-5def-4061-b7f8-a763e30b9868\") " Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.515287 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e17317b7-5def-4061-b7f8-a763e30b9868-bundle\") pod \"e17317b7-5def-4061-b7f8-a763e30b9868\" (UID: \"e17317b7-5def-4061-b7f8-a763e30b9868\") " Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.515325 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzsq7\" (UniqueName: \"kubernetes.io/projected/e17317b7-5def-4061-b7f8-a763e30b9868-kube-api-access-xzsq7\") pod \"e17317b7-5def-4061-b7f8-a763e30b9868\" (UID: \"e17317b7-5def-4061-b7f8-a763e30b9868\") " Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.520158 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e17317b7-5def-4061-b7f8-a763e30b9868-bundle" (OuterVolumeSpecName: "bundle") pod "e17317b7-5def-4061-b7f8-a763e30b9868" (UID: "e17317b7-5def-4061-b7f8-a763e30b9868"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.528626 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e17317b7-5def-4061-b7f8-a763e30b9868-util" (OuterVolumeSpecName: "util") pod "e17317b7-5def-4061-b7f8-a763e30b9868" (UID: "e17317b7-5def-4061-b7f8-a763e30b9868"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.531327 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17317b7-5def-4061-b7f8-a763e30b9868-kube-api-access-xzsq7" (OuterVolumeSpecName: "kube-api-access-xzsq7") pod "e17317b7-5def-4061-b7f8-a763e30b9868" (UID: "e17317b7-5def-4061-b7f8-a763e30b9868"). InnerVolumeSpecName "kube-api-access-xzsq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.605692 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.616701 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8487ba68-4c63-4d44-a687-1ca047c859d2-util\") pod \"8487ba68-4c63-4d44-a687-1ca047c859d2\" (UID: \"8487ba68-4c63-4d44-a687-1ca047c859d2\") " Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.616856 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7b6f\" (UniqueName: \"kubernetes.io/projected/8487ba68-4c63-4d44-a687-1ca047c859d2-kube-api-access-x7b6f\") pod \"8487ba68-4c63-4d44-a687-1ca047c859d2\" (UID: \"8487ba68-4c63-4d44-a687-1ca047c859d2\") " Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.616900 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8487ba68-4c63-4d44-a687-1ca047c859d2-bundle\") pod \"8487ba68-4c63-4d44-a687-1ca047c859d2\" (UID: \"8487ba68-4c63-4d44-a687-1ca047c859d2\") " Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.617098 4741 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e17317b7-5def-4061-b7f8-a763e30b9868-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.617133 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzsq7\" (UniqueName: \"kubernetes.io/projected/e17317b7-5def-4061-b7f8-a763e30b9868-kube-api-access-xzsq7\") on node \"crc\" DevicePath \"\"" Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.617145 4741 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e17317b7-5def-4061-b7f8-a763e30b9868-util\") on node \"crc\" DevicePath \"\"" Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.617855 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8487ba68-4c63-4d44-a687-1ca047c859d2-bundle" (OuterVolumeSpecName: "bundle") pod "8487ba68-4c63-4d44-a687-1ca047c859d2" (UID: "8487ba68-4c63-4d44-a687-1ca047c859d2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.628298 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8487ba68-4c63-4d44-a687-1ca047c859d2-kube-api-access-x7b6f" (OuterVolumeSpecName: "kube-api-access-x7b6f") pod "8487ba68-4c63-4d44-a687-1ca047c859d2" (UID: "8487ba68-4c63-4d44-a687-1ca047c859d2"). InnerVolumeSpecName "kube-api-access-x7b6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.638212 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8487ba68-4c63-4d44-a687-1ca047c859d2-util" (OuterVolumeSpecName: "util") pod "8487ba68-4c63-4d44-a687-1ca047c859d2" (UID: "8487ba68-4c63-4d44-a687-1ca047c859d2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.718073 4741 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8487ba68-4c63-4d44-a687-1ca047c859d2-util\") on node \"crc\" DevicePath \"\""
Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.718131 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7b6f\" (UniqueName: \"kubernetes.io/projected/8487ba68-4c63-4d44-a687-1ca047c859d2-kube-api-access-x7b6f\") on node \"crc\" DevicePath \"\""
Feb 26 08:27:21 crc kubenswrapper[4741]: I0226 08:27:21.718144 4741 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8487ba68-4c63-4d44-a687-1ca047c859d2-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 08:27:22 crc kubenswrapper[4741]: I0226 08:27:22.143517 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7" event={"ID":"e17317b7-5def-4061-b7f8-a763e30b9868","Type":"ContainerDied","Data":"611e4d6a973f0c746a32e4af0bc853715769ee305a61ba09382402c287c51a41"}
Feb 26 08:27:22 crc kubenswrapper[4741]: I0226 08:27:22.143589 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="611e4d6a973f0c746a32e4af0bc853715769ee305a61ba09382402c287c51a41"
Feb 26 08:27:22 crc kubenswrapper[4741]: I0226 08:27:22.143680 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7"
Feb 26 08:27:22 crc kubenswrapper[4741]: I0226 08:27:22.150403 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z" event={"ID":"8487ba68-4c63-4d44-a687-1ca047c859d2","Type":"ContainerDied","Data":"b2f4a99826906a0c1ca0d87e12b734905802ea2275b045e3f89074347aa792c3"}
Feb 26 08:27:22 crc kubenswrapper[4741]: I0226 08:27:22.150476 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2f4a99826906a0c1ca0d87e12b734905802ea2275b045e3f89074347aa792c3"
Feb 26 08:27:22 crc kubenswrapper[4741]: I0226 08:27:22.150595 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z"
Feb 26 08:27:22 crc kubenswrapper[4741]: I0226 08:27:22.153738 4741 generic.go:334] "Generic (PLEG): container finished" podID="e0f92902-b627-4e1f-9882-cf9f396220b8" containerID="34a7c94355b5ddc0690a91ffbc4f36042e0e12aa718101c6a14919cf3bdece6d" exitCode=0
Feb 26 08:27:22 crc kubenswrapper[4741]: I0226 08:27:22.153780 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxq6k" event={"ID":"e0f92902-b627-4e1f-9882-cf9f396220b8","Type":"ContainerDied","Data":"34a7c94355b5ddc0690a91ffbc4f36042e0e12aa718101c6a14919cf3bdece6d"}
Feb 26 08:27:23 crc kubenswrapper[4741]: I0226 08:27:23.166397 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxq6k" event={"ID":"e0f92902-b627-4e1f-9882-cf9f396220b8","Type":"ContainerStarted","Data":"32252331e43fb49a8da5f56af5b7516aa02bea9dd0e0e9f0c959f02c5725e1e1"}
Feb 26 08:27:23 crc kubenswrapper[4741]: I0226 08:27:23.190999 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dxq6k" podStartSLOduration=3.73766678 podStartE2EDuration="6.190971834s" podCreationTimestamp="2026-02-26 08:27:17 +0000 UTC" firstStartedPulling="2026-02-26 08:27:20.124674407 +0000 UTC m=+875.120611794" lastFinishedPulling="2026-02-26 08:27:22.577979461 +0000 UTC m=+877.573916848" observedRunningTime="2026-02-26 08:27:23.187529076 +0000 UTC m=+878.183466463" watchObservedRunningTime="2026-02-26 08:27:23.190971834 +0000 UTC m=+878.186909221"
Feb 26 08:27:28 crc kubenswrapper[4741]: I0226 08:27:28.170393 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dxq6k"
Feb 26 08:27:28 crc kubenswrapper[4741]: I0226 08:27:28.170900 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dxq6k"
Feb 26 08:27:29 crc kubenswrapper[4741]: I0226 08:27:29.222329 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dxq6k" podUID="e0f92902-b627-4e1f-9882-cf9f396220b8" containerName="registry-server" probeResult="failure" output=<
Feb 26 08:27:29 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s
Feb 26 08:27:29 crc kubenswrapper[4741]: >
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.214550 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"]
Feb 26 08:27:32 crc kubenswrapper[4741]: E0226 08:27:32.215187 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8487ba68-4c63-4d44-a687-1ca047c859d2" containerName="pull"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.215204 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8487ba68-4c63-4d44-a687-1ca047c859d2" containerName="pull"
Feb 26 08:27:32 crc kubenswrapper[4741]: E0226 08:27:32.215216 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17317b7-5def-4061-b7f8-a763e30b9868" containerName="util"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.215221 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17317b7-5def-4061-b7f8-a763e30b9868" containerName="util"
Feb 26 08:27:32 crc kubenswrapper[4741]: E0226 08:27:32.215232 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17317b7-5def-4061-b7f8-a763e30b9868" containerName="extract"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.215238 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17317b7-5def-4061-b7f8-a763e30b9868" containerName="extract"
Feb 26 08:27:32 crc kubenswrapper[4741]: E0226 08:27:32.215247 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8487ba68-4c63-4d44-a687-1ca047c859d2" containerName="util"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.215253 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8487ba68-4c63-4d44-a687-1ca047c859d2" containerName="util"
Feb 26 08:27:32 crc kubenswrapper[4741]: E0226 08:27:32.215262 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8487ba68-4c63-4d44-a687-1ca047c859d2" containerName="extract"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.215269 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8487ba68-4c63-4d44-a687-1ca047c859d2" containerName="extract"
Feb 26 08:27:32 crc kubenswrapper[4741]: E0226 08:27:32.215281 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17317b7-5def-4061-b7f8-a763e30b9868" containerName="pull"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.215287 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17317b7-5def-4061-b7f8-a763e30b9868" containerName="pull"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.215403 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17317b7-5def-4061-b7f8-a763e30b9868" containerName="extract"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.215421 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="8487ba68-4c63-4d44-a687-1ca047c859d2" containerName="extract"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.216123 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.218038 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.220415 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.220445 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.220425 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.224678 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.227357 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-brbct"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.256149 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"]
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.306213 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29497\" (UniqueName: \"kubernetes.io/projected/b7349090-2a42-41d0-9bed-5624de634744-kube-api-access-29497\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.306295 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7349090-2a42-41d0-9bed-5624de634744-webhook-cert\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.306344 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7349090-2a42-41d0-9bed-5624de634744-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.306371 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7349090-2a42-41d0-9bed-5624de634744-apiservice-cert\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.306404 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b7349090-2a42-41d0-9bed-5624de634744-manager-config\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.407455 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b7349090-2a42-41d0-9bed-5624de634744-manager-config\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.407528 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29497\" (UniqueName: \"kubernetes.io/projected/b7349090-2a42-41d0-9bed-5624de634744-kube-api-access-29497\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.407565 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7349090-2a42-41d0-9bed-5624de634744-webhook-cert\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.407608 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7349090-2a42-41d0-9bed-5624de634744-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.407632 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7349090-2a42-41d0-9bed-5624de634744-apiservice-cert\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.410022 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b7349090-2a42-41d0-9bed-5624de634744-manager-config\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.414965 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7349090-2a42-41d0-9bed-5624de634744-apiservice-cert\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.415248 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7349090-2a42-41d0-9bed-5624de634744-webhook-cert\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.415825 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7349090-2a42-41d0-9bed-5624de634744-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.425304 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29497\" (UniqueName: \"kubernetes.io/projected/b7349090-2a42-41d0-9bed-5624de634744-kube-api-access-29497\") pod \"loki-operator-controller-manager-6c89769cfb-mbqvs\" (UID: \"b7349090-2a42-41d0-9bed-5624de634744\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:32 crc kubenswrapper[4741]: I0226 08:27:32.536088 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:33 crc kubenswrapper[4741]: I0226 08:27:33.174037 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"]
Feb 26 08:27:33 crc kubenswrapper[4741]: I0226 08:27:33.240470 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs" event={"ID":"b7349090-2a42-41d0-9bed-5624de634744","Type":"ContainerStarted","Data":"7b5d9adcc8f2a6cd56cfe52f4ffd09891ac5e39c21812d3a9e147ec85cc604b6"}
Feb 26 08:27:33 crc kubenswrapper[4741]: I0226 08:27:33.636779 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-vqmnv"]
Feb 26 08:27:33 crc kubenswrapper[4741]: I0226 08:27:33.638003 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-vqmnv"
Feb 26 08:27:33 crc kubenswrapper[4741]: I0226 08:27:33.648623 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-dzft2"
Feb 26 08:27:33 crc kubenswrapper[4741]: I0226 08:27:33.649547 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt"
Feb 26 08:27:33 crc kubenswrapper[4741]: I0226 08:27:33.652550 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt"
Feb 26 08:27:33 crc kubenswrapper[4741]: I0226 08:27:33.657537 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-vqmnv"]
Feb 26 08:27:33 crc kubenswrapper[4741]: I0226 08:27:33.733205 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfnfq\" (UniqueName: \"kubernetes.io/projected/b8ef11d6-01e5-4e65-a38c-1631d3a423ba-kube-api-access-rfnfq\") pod \"cluster-logging-operator-c769fd969-vqmnv\" (UID: \"b8ef11d6-01e5-4e65-a38c-1631d3a423ba\") " pod="openshift-logging/cluster-logging-operator-c769fd969-vqmnv"
Feb 26 08:27:33 crc kubenswrapper[4741]: I0226 08:27:33.835787 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfnfq\" (UniqueName: \"kubernetes.io/projected/b8ef11d6-01e5-4e65-a38c-1631d3a423ba-kube-api-access-rfnfq\") pod \"cluster-logging-operator-c769fd969-vqmnv\" (UID: \"b8ef11d6-01e5-4e65-a38c-1631d3a423ba\") " pod="openshift-logging/cluster-logging-operator-c769fd969-vqmnv"
Feb 26 08:27:33 crc kubenswrapper[4741]: I0226 08:27:33.863705 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfnfq\" (UniqueName: \"kubernetes.io/projected/b8ef11d6-01e5-4e65-a38c-1631d3a423ba-kube-api-access-rfnfq\") pod \"cluster-logging-operator-c769fd969-vqmnv\" (UID: \"b8ef11d6-01e5-4e65-a38c-1631d3a423ba\") " pod="openshift-logging/cluster-logging-operator-c769fd969-vqmnv"
Feb 26 08:27:33 crc kubenswrapper[4741]: I0226 08:27:33.967843 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-vqmnv"
Feb 26 08:27:34 crc kubenswrapper[4741]: I0226 08:27:34.436882 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-vqmnv"]
Feb 26 08:27:35 crc kubenswrapper[4741]: I0226 08:27:35.260140 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-vqmnv" event={"ID":"b8ef11d6-01e5-4e65-a38c-1631d3a423ba","Type":"ContainerStarted","Data":"979cafe5550f1988ac2a42f6863283074522aad19e6a6ab5937c2512676e6d07"}
Feb 26 08:27:38 crc kubenswrapper[4741]: I0226 08:27:38.234162 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dxq6k"
Feb 26 08:27:38 crc kubenswrapper[4741]: I0226 08:27:38.296008 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dxq6k"
Feb 26 08:27:40 crc kubenswrapper[4741]: I0226 08:27:40.343733 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs" event={"ID":"b7349090-2a42-41d0-9bed-5624de634744","Type":"ContainerStarted","Data":"3ef2ddfcd2186585f7aab15753653eccaae62a29cd4a5cc6e2297d6d310d1110"}
Feb 26 08:27:41 crc kubenswrapper[4741]: I0226 08:27:41.777506 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxq6k"]
Feb 26 08:27:41 crc kubenswrapper[4741]: I0226 08:27:41.778285 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dxq6k" podUID="e0f92902-b627-4e1f-9882-cf9f396220b8" containerName="registry-server" containerID="cri-o://32252331e43fb49a8da5f56af5b7516aa02bea9dd0e0e9f0c959f02c5725e1e1" gracePeriod=2
Feb 26 08:27:42 crc kubenswrapper[4741]: I0226 08:27:42.368796 4741 generic.go:334] "Generic (PLEG): container finished" podID="e0f92902-b627-4e1f-9882-cf9f396220b8" containerID="32252331e43fb49a8da5f56af5b7516aa02bea9dd0e0e9f0c959f02c5725e1e1" exitCode=0
Feb 26 08:27:42 crc kubenswrapper[4741]: I0226 08:27:42.368863 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxq6k" event={"ID":"e0f92902-b627-4e1f-9882-cf9f396220b8","Type":"ContainerDied","Data":"32252331e43fb49a8da5f56af5b7516aa02bea9dd0e0e9f0c959f02c5725e1e1"}
Feb 26 08:27:45 crc kubenswrapper[4741]: I0226 08:27:45.922773 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxq6k"
Feb 26 08:27:45 crc kubenswrapper[4741]: I0226 08:27:45.979098 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0f92902-b627-4e1f-9882-cf9f396220b8-catalog-content\") pod \"e0f92902-b627-4e1f-9882-cf9f396220b8\" (UID: \"e0f92902-b627-4e1f-9882-cf9f396220b8\") "
Feb 26 08:27:45 crc kubenswrapper[4741]: I0226 08:27:45.979176 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd2nq\" (UniqueName: \"kubernetes.io/projected/e0f92902-b627-4e1f-9882-cf9f396220b8-kube-api-access-rd2nq\") pod \"e0f92902-b627-4e1f-9882-cf9f396220b8\" (UID: \"e0f92902-b627-4e1f-9882-cf9f396220b8\") "
Feb 26 08:27:45 crc kubenswrapper[4741]: I0226 08:27:45.979222 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0f92902-b627-4e1f-9882-cf9f396220b8-utilities\") pod \"e0f92902-b627-4e1f-9882-cf9f396220b8\" (UID: \"e0f92902-b627-4e1f-9882-cf9f396220b8\") "
Feb 26 08:27:45 crc kubenswrapper[4741]: I0226 08:27:45.983814 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f92902-b627-4e1f-9882-cf9f396220b8-utilities" (OuterVolumeSpecName: "utilities") pod "e0f92902-b627-4e1f-9882-cf9f396220b8" (UID: "e0f92902-b627-4e1f-9882-cf9f396220b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.010327 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f92902-b627-4e1f-9882-cf9f396220b8-kube-api-access-rd2nq" (OuterVolumeSpecName: "kube-api-access-rd2nq") pod "e0f92902-b627-4e1f-9882-cf9f396220b8" (UID: "e0f92902-b627-4e1f-9882-cf9f396220b8"). InnerVolumeSpecName "kube-api-access-rd2nq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.084368 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd2nq\" (UniqueName: \"kubernetes.io/projected/e0f92902-b627-4e1f-9882-cf9f396220b8-kube-api-access-rd2nq\") on node \"crc\" DevicePath \"\""
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.084397 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0f92902-b627-4e1f-9882-cf9f396220b8-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.141664 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f92902-b627-4e1f-9882-cf9f396220b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0f92902-b627-4e1f-9882-cf9f396220b8" (UID: "e0f92902-b627-4e1f-9882-cf9f396220b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.313233 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0f92902-b627-4e1f-9882-cf9f396220b8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.409600 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-vqmnv" event={"ID":"b8ef11d6-01e5-4e65-a38c-1631d3a423ba","Type":"ContainerStarted","Data":"eeef7bfb22fa16647dd16934787d6e220327d19b64b2b1f107dc36f4c308e131"}
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.415436 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxq6k" event={"ID":"e0f92902-b627-4e1f-9882-cf9f396220b8","Type":"ContainerDied","Data":"3fb6c67b323e28d755cc21d5a13027a4dd169441f17d84144f1fc06c9d01a630"}
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.415505 4741 scope.go:117] "RemoveContainer" containerID="32252331e43fb49a8da5f56af5b7516aa02bea9dd0e0e9f0c959f02c5725e1e1"
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.415668 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxq6k"
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.438833 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-vqmnv" podStartSLOduration=1.9234437450000001 podStartE2EDuration="13.438807096s" podCreationTimestamp="2026-02-26 08:27:33 +0000 UTC" firstStartedPulling="2026-02-26 08:27:34.445641227 +0000 UTC m=+889.441578614" lastFinishedPulling="2026-02-26 08:27:45.961004578 +0000 UTC m=+900.956941965" observedRunningTime="2026-02-26 08:27:46.431710075 +0000 UTC m=+901.427647462" watchObservedRunningTime="2026-02-26 08:27:46.438807096 +0000 UTC m=+901.434744483"
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.479951 4741 scope.go:117] "RemoveContainer" containerID="34a7c94355b5ddc0690a91ffbc4f36042e0e12aa718101c6a14919cf3bdece6d"
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.521495 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxq6k"]
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.532938 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dxq6k"]
Feb 26 08:27:46 crc kubenswrapper[4741]: I0226 08:27:46.533242 4741 scope.go:117] "RemoveContainer" containerID="e6c9f3706b247cb246711fff88765694503b5fac85ab2a0627fe27dde09de56b"
Feb 26 08:27:46 crc kubenswrapper[4741]: E0226 08:27:46.550210 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0f92902_b627_4e1f_9882_cf9f396220b8.slice\": RecentStats: unable to find data in memory cache]"
Feb 26 08:27:47 crc kubenswrapper[4741]: I0226 08:27:47.803544 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f92902-b627-4e1f-9882-cf9f396220b8" path="/var/lib/kubelet/pods/e0f92902-b627-4e1f-9882-cf9f396220b8/volumes"
Feb 26 08:27:55 crc kubenswrapper[4741]: I0226 08:27:55.531071 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs" event={"ID":"b7349090-2a42-41d0-9bed-5624de634744","Type":"ContainerStarted","Data":"e36429d259bada1f561eb92ca8391cdb4f05e1bf201e759b1fe3797bb61975c8"}
Feb 26 08:27:55 crc kubenswrapper[4741]: I0226 08:27:55.532712 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:55 crc kubenswrapper[4741]: I0226 08:27:55.540474 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 08:27:55 crc kubenswrapper[4741]: I0226 08:27:55.574575 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs" podStartSLOduration=2.249236235 podStartE2EDuration="23.574533117s" podCreationTimestamp="2026-02-26 08:27:32 +0000 UTC" firstStartedPulling="2026-02-26 08:27:33.192376466 +0000 UTC m=+888.188313853" lastFinishedPulling="2026-02-26 08:27:54.517673348 +0000 UTC m=+909.513610735" observedRunningTime="2026-02-26 08:27:55.564639046 +0000 UTC m=+910.560576533" watchObservedRunningTime="2026-02-26 08:27:55.574533117 +0000 UTC m=+910.570470554"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.486931 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Feb 26 08:27:59 crc kubenswrapper[4741]: E0226 08:27:59.487600 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f92902-b627-4e1f-9882-cf9f396220b8" containerName="extract-utilities"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.487616 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f92902-b627-4e1f-9882-cf9f396220b8" containerName="extract-utilities"
Feb 26 08:27:59 crc kubenswrapper[4741]: E0226 08:27:59.487642 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f92902-b627-4e1f-9882-cf9f396220b8" containerName="extract-content"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.487648 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f92902-b627-4e1f-9882-cf9f396220b8" containerName="extract-content"
Feb 26 08:27:59 crc kubenswrapper[4741]: E0226 08:27:59.487657 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f92902-b627-4e1f-9882-cf9f396220b8" containerName="registry-server"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.487665 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f92902-b627-4e1f-9882-cf9f396220b8" containerName="registry-server"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.487799 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f92902-b627-4e1f-9882-cf9f396220b8" containerName="registry-server"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.488450 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.490567 4741 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-tv4t8"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.493189 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.494271 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.504042 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.654400 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m6vn\" (UniqueName: \"kubernetes.io/projected/0f4943af-0f99-4324-966c-2ce9a7d167dc-kube-api-access-2m6vn\") pod \"minio\" (UID: \"0f4943af-0f99-4324-966c-2ce9a7d167dc\") " pod="minio-dev/minio"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.654487 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a07c5e12-4872-4a2f-ba49-a8302766b8fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07c5e12-4872-4a2f-ba49-a8302766b8fe\") pod \"minio\" (UID: \"0f4943af-0f99-4324-966c-2ce9a7d167dc\") " pod="minio-dev/minio"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.756502 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m6vn\" (UniqueName: \"kubernetes.io/projected/0f4943af-0f99-4324-966c-2ce9a7d167dc-kube-api-access-2m6vn\") pod \"minio\" (UID: \"0f4943af-0f99-4324-966c-2ce9a7d167dc\") " pod="minio-dev/minio"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.756624 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a07c5e12-4872-4a2f-ba49-a8302766b8fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07c5e12-4872-4a2f-ba49-a8302766b8fe\") pod \"minio\" (UID: \"0f4943af-0f99-4324-966c-2ce9a7d167dc\") " pod="minio-dev/minio"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.770657 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.770721 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a07c5e12-4872-4a2f-ba49-a8302766b8fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07c5e12-4872-4a2f-ba49-a8302766b8fe\") pod \"minio\" (UID: \"0f4943af-0f99-4324-966c-2ce9a7d167dc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0795553d2c3b3b4c3acf7447d0e42c48137d996b7152ce8a5cd245138e159b64/globalmount\"" pod="minio-dev/minio"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.787800 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m6vn\" (UniqueName: \"kubernetes.io/projected/0f4943af-0f99-4324-966c-2ce9a7d167dc-kube-api-access-2m6vn\") pod \"minio\" (UID: \"0f4943af-0f99-4324-966c-2ce9a7d167dc\") " pod="minio-dev/minio"
Feb 26 08:27:59 crc kubenswrapper[4741]: I0226 08:27:59.916812 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a07c5e12-4872-4a2f-ba49-a8302766b8fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07c5e12-4872-4a2f-ba49-a8302766b8fe\") pod \"minio\" (UID: \"0f4943af-0f99-4324-966c-2ce9a7d167dc\") " pod="minio-dev/minio"
Feb 26 08:28:00 crc kubenswrapper[4741]: I0226 08:28:00.135133 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534908-4pvhz"]
Feb 26 08:28:00 crc kubenswrapper[4741]: I0226 08:28:00.136274 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534908-4pvhz"
Feb 26 08:28:00 crc kubenswrapper[4741]: I0226 08:28:00.141012 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 08:28:00 crc kubenswrapper[4741]: I0226 08:28:00.141273 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6"
Feb 26 08:28:00 crc kubenswrapper[4741]: I0226 08:28:00.141325 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 08:28:00 crc kubenswrapper[4741]: I0226 08:28:00.147849 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534908-4pvhz"]
Feb 26 08:28:00 crc kubenswrapper[4741]: I0226 08:28:00.149279 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 26 08:28:00 crc kubenswrapper[4741]: I0226 08:28:00.265594 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8nnn\" (UniqueName: \"kubernetes.io/projected/d0528dad-89da-4e89-a788-3d5294df861e-kube-api-access-x8nnn\") pod \"auto-csr-approver-29534908-4pvhz\" (UID: \"d0528dad-89da-4e89-a788-3d5294df861e\") " pod="openshift-infra/auto-csr-approver-29534908-4pvhz"
Feb 26 08:28:00 crc kubenswrapper[4741]: I0226 08:28:00.367677 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8nnn\" (UniqueName: \"kubernetes.io/projected/d0528dad-89da-4e89-a788-3d5294df861e-kube-api-access-x8nnn\") pod \"auto-csr-approver-29534908-4pvhz\" (UID: \"d0528dad-89da-4e89-a788-3d5294df861e\") " pod="openshift-infra/auto-csr-approver-29534908-4pvhz"
Feb 26 08:28:00 crc kubenswrapper[4741]: I0226 08:28:00.388465 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8nnn\" (UniqueName: \"kubernetes.io/projected/d0528dad-89da-4e89-a788-3d5294df861e-kube-api-access-x8nnn\") pod \"auto-csr-approver-29534908-4pvhz\" (UID: \"d0528dad-89da-4e89-a788-3d5294df861e\") " pod="openshift-infra/auto-csr-approver-29534908-4pvhz"
Feb 26 08:28:00 crc kubenswrapper[4741]: I0226 08:28:00.454310 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534908-4pvhz"
Feb 26 08:28:00 crc kubenswrapper[4741]: I0226 08:28:00.595056 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 26 08:28:00 crc kubenswrapper[4741]: I0226 08:28:00.702490 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534908-4pvhz"]
Feb 26 08:28:00 crc kubenswrapper[4741]: W0226 08:28:00.710406 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0528dad_89da_4e89_a788_3d5294df861e.slice/crio-d4c1948ae3ebb74363071b1b0c3f42d0e44ed18f52caa9951bdfcc2f88772e7e WatchSource:0}: Error finding container d4c1948ae3ebb74363071b1b0c3f42d0e44ed18f52caa9951bdfcc2f88772e7e: Status 404 returned error can't find the container with id d4c1948ae3ebb74363071b1b0c3f42d0e44ed18f52caa9951bdfcc2f88772e7e
Feb 26 08:28:01 crc kubenswrapper[4741]: I0226 08:28:01.586618 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534908-4pvhz" event={"ID":"d0528dad-89da-4e89-a788-3d5294df861e","Type":"ContainerStarted","Data":"d4c1948ae3ebb74363071b1b0c3f42d0e44ed18f52caa9951bdfcc2f88772e7e"}
Feb 26 08:28:01 crc kubenswrapper[4741]: I0226 08:28:01.589150 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"0f4943af-0f99-4324-966c-2ce9a7d167dc","Type":"ContainerStarted","Data":"beb29ea5088e36bbd12475c81528e4d6e1bb887686a1eb68015cd1481e19df4f"}
Feb 26 08:28:04 crc kubenswrapper[4741]: I0226 08:28:04.618946 4741 generic.go:334] "Generic (PLEG): container finished" podID="d0528dad-89da-4e89-a788-3d5294df861e" containerID="ea38f5b32a76aca6f0722006d3fa3be65f7e356e6c7df19b53d8a42990ca47c8" exitCode=0
Feb 26 08:28:04 crc kubenswrapper[4741]: I0226 08:28:04.619715 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534908-4pvhz" event={"ID":"d0528dad-89da-4e89-a788-3d5294df861e","Type":"ContainerDied","Data":"ea38f5b32a76aca6f0722006d3fa3be65f7e356e6c7df19b53d8a42990ca47c8"}
Feb 26 08:28:05 crc kubenswrapper[4741]: I0226 08:28:05.639844 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"0f4943af-0f99-4324-966c-2ce9a7d167dc","Type":"ContainerStarted","Data":"e65caf1101f5fa4e537757f3aa4f52e2b36f62ad3b512b0dcc6848297dd0a967"}
Feb 26 08:28:05 crc kubenswrapper[4741]: I0226 08:28:05.665475 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.481987628 podStartE2EDuration="8.66544894s" podCreationTimestamp="2026-02-26 08:27:57 +0000 UTC" firstStartedPulling="2026-02-26 08:28:00.605778655 +0000 UTC m=+915.601716042" lastFinishedPulling="2026-02-26 08:28:04.789239957 +0000 UTC m=+919.785177354" observedRunningTime="2026-02-26 08:28:05.662158636 +0000 UTC m=+920.658096053" watchObservedRunningTime="2026-02-26 08:28:05.66544894 +0000 UTC m=+920.661386327"
Feb 26 08:28:06 crc kubenswrapper[4741]: I0226 08:28:06.125397 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534908-4pvhz" Feb 26 08:28:06 crc kubenswrapper[4741]: I0226 08:28:06.278559 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8nnn\" (UniqueName: \"kubernetes.io/projected/d0528dad-89da-4e89-a788-3d5294df861e-kube-api-access-x8nnn\") pod \"d0528dad-89da-4e89-a788-3d5294df861e\" (UID: \"d0528dad-89da-4e89-a788-3d5294df861e\") " Feb 26 08:28:06 crc kubenswrapper[4741]: I0226 08:28:06.297380 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0528dad-89da-4e89-a788-3d5294df861e-kube-api-access-x8nnn" (OuterVolumeSpecName: "kube-api-access-x8nnn") pod "d0528dad-89da-4e89-a788-3d5294df861e" (UID: "d0528dad-89da-4e89-a788-3d5294df861e"). InnerVolumeSpecName "kube-api-access-x8nnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:28:06 crc kubenswrapper[4741]: I0226 08:28:06.380525 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8nnn\" (UniqueName: \"kubernetes.io/projected/d0528dad-89da-4e89-a788-3d5294df861e-kube-api-access-x8nnn\") on node \"crc\" DevicePath \"\"" Feb 26 08:28:06 crc kubenswrapper[4741]: I0226 08:28:06.650014 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534908-4pvhz" Feb 26 08:28:06 crc kubenswrapper[4741]: I0226 08:28:06.650000 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534908-4pvhz" event={"ID":"d0528dad-89da-4e89-a788-3d5294df861e","Type":"ContainerDied","Data":"d4c1948ae3ebb74363071b1b0c3f42d0e44ed18f52caa9951bdfcc2f88772e7e"} Feb 26 08:28:06 crc kubenswrapper[4741]: I0226 08:28:06.650069 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4c1948ae3ebb74363071b1b0c3f42d0e44ed18f52caa9951bdfcc2f88772e7e" Feb 26 08:28:07 crc kubenswrapper[4741]: I0226 08:28:07.195817 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534902-k8qlb"] Feb 26 08:28:07 crc kubenswrapper[4741]: I0226 08:28:07.200789 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534902-k8qlb"] Feb 26 08:28:08 crc kubenswrapper[4741]: I0226 08:28:08.075994 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fc0b141-0dab-4fa4-923b-af343e6ecb35" path="/var/lib/kubelet/pods/1fc0b141-0dab-4fa4-923b-af343e6ecb35/volumes" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.313280 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r"] Feb 26 08:28:10 crc kubenswrapper[4741]: E0226 08:28:10.313597 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0528dad-89da-4e89-a788-3d5294df861e" containerName="oc" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.313610 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0528dad-89da-4e89-a788-3d5294df861e" containerName="oc" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.313730 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0528dad-89da-4e89-a788-3d5294df861e" containerName="oc" Feb 26 08:28:10 crc 
kubenswrapper[4741]: I0226 08:28:10.314265 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.318038 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.318328 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.318967 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.320448 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-4jsnl" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.321876 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.331962 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r"] Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.351320 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.351366 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: 
\"kubernetes.io/secret/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.351398 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.351429 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p7dx\" (UniqueName: \"kubernetes.io/projected/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-kube-api-access-6p7dx\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.351492 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-config\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.453167 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " 
pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.453225 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.453277 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.453303 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p7dx\" (UniqueName: \"kubernetes.io/projected/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-kube-api-access-6p7dx\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.453373 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-config\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.454352 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.454593 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-config\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.472355 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.481078 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.481958 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p7dx\" (UniqueName: \"kubernetes.io/projected/091900f2-d6cc-4fbb-8b1b-f4216f868a9c-kube-api-access-6p7dx\") pod \"logging-loki-distributor-5d5548c9f5-fvl8r\" (UID: \"091900f2-d6cc-4fbb-8b1b-f4216f868a9c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc 
kubenswrapper[4741]: I0226 08:28:10.497050 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6"] Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.498210 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.512714 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.512980 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.518841 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.522719 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6"] Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.554455 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22lrt\" (UniqueName: \"kubernetes.io/projected/41e6c349-1fc0-4972-a080-55bb785a4bf7-kube-api-access-22lrt\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.554511 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e6c349-1fc0-4972-a080-55bb785a4bf7-config\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.554560 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/41e6c349-1fc0-4972-a080-55bb785a4bf7-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.554606 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/41e6c349-1fc0-4972-a080-55bb785a4bf7-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.554644 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/41e6c349-1fc0-4972-a080-55bb785a4bf7-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.554669 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e6c349-1fc0-4972-a080-55bb785a4bf7-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.590688 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d"] Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.591599 4741 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.595962 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.596341 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.615919 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d"] Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.652963 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.659608 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/41e6c349-1fc0-4972-a080-55bb785a4bf7-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.659697 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4896602d-060e-4777-957f-ff83ce8e812f-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.659748 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: 
\"kubernetes.io/secret/4896602d-060e-4777-957f-ff83ce8e812f-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.659797 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/41e6c349-1fc0-4972-a080-55bb785a4bf7-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.659833 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e6c349-1fc0-4972-a080-55bb785a4bf7-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.659877 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22lrt\" (UniqueName: \"kubernetes.io/projected/41e6c349-1fc0-4972-a080-55bb785a4bf7-kube-api-access-22lrt\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.659905 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/4896602d-060e-4777-957f-ff83ce8e812f-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.659935 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27g49\" (UniqueName: \"kubernetes.io/projected/4896602d-060e-4777-957f-ff83ce8e812f-kube-api-access-27g49\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.663234 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e6c349-1fc0-4972-a080-55bb785a4bf7-config\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.663482 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4896602d-060e-4777-957f-ff83ce8e812f-config\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.663526 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/41e6c349-1fc0-4972-a080-55bb785a4bf7-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.667942 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/41e6c349-1fc0-4972-a080-55bb785a4bf7-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.670658 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/41e6c349-1fc0-4972-a080-55bb785a4bf7-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.694343 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e6c349-1fc0-4972-a080-55bb785a4bf7-config\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.695164 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/41e6c349-1fc0-4972-a080-55bb785a4bf7-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.696901 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/41e6c349-1fc0-4972-a080-55bb785a4bf7-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.703405 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-22lrt\" (UniqueName: \"kubernetes.io/projected/41e6c349-1fc0-4972-a080-55bb785a4bf7-kube-api-access-22lrt\") pod \"logging-loki-querier-76bf7b6d45-kvhl6\" (UID: \"41e6c349-1fc0-4972-a080-55bb785a4bf7\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.767501 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/4896602d-060e-4777-957f-ff83ce8e812f-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.767576 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27g49\" (UniqueName: \"kubernetes.io/projected/4896602d-060e-4777-957f-ff83ce8e812f-kube-api-access-27g49\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.767658 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4896602d-060e-4777-957f-ff83ce8e812f-config\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.767749 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4896602d-060e-4777-957f-ff83ce8e812f-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: 
\"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.767794 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/4896602d-060e-4777-957f-ff83ce8e812f-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.781734 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4896602d-060e-4777-957f-ff83ce8e812f-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.782874 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4896602d-060e-4777-957f-ff83ce8e812f-config\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.794850 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/4896602d-060e-4777-957f-ff83ce8e812f-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.803244 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/4896602d-060e-4777-957f-ff83ce8e812f-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.858757 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.866408 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-7bbb966984-jqlhm"] Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.868179 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27g49\" (UniqueName: \"kubernetes.io/projected/4896602d-060e-4777-957f-ff83ce8e812f-kube-api-access-27g49\") pod \"logging-loki-query-frontend-6d6859c548-wjt7d\" (UID: \"4896602d-060e-4777-957f-ff83ce8e812f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.871428 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.892643 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.892707 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.892849 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.892880 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-plq6m" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.893656 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.893812 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.893902 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7bbb966984-jqlhm"] Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.910362 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-7bbb966984-qjtwt"] Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.910796 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.911846 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.923962 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7bbb966984-qjtwt"] Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.993761 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b029b8c8-35eb-4509-a29a-9ada4434b899-lokistack-gateway\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994522 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b029b8c8-35eb-4509-a29a-9ada4434b899-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994567 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b029b8c8-35eb-4509-a29a-9ada4434b899-rbac\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994599 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/aad6cae3-3b9d-4d9e-8549-55da6e10901d-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " 
pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994646 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h6d7\" (UniqueName: \"kubernetes.io/projected/aad6cae3-3b9d-4d9e-8549-55da6e10901d-kube-api-access-8h6d7\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994692 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b029b8c8-35eb-4509-a29a-9ada4434b899-tls-secret\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994722 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aad6cae3-3b9d-4d9e-8549-55da6e10901d-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994744 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b029b8c8-35eb-4509-a29a-9ada4434b899-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994770 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/aad6cae3-3b9d-4d9e-8549-55da6e10901d-tenants\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994802 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aad6cae3-3b9d-4d9e-8549-55da6e10901d-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994829 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qgg8\" (UniqueName: \"kubernetes.io/projected/b029b8c8-35eb-4509-a29a-9ada4434b899-kube-api-access-9qgg8\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994863 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/aad6cae3-3b9d-4d9e-8549-55da6e10901d-lokistack-gateway\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994884 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b029b8c8-35eb-4509-a29a-9ada4434b899-tenants\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: 
\"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994912 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b029b8c8-35eb-4509-a29a-9ada4434b899-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.994930 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/aad6cae3-3b9d-4d9e-8549-55da6e10901d-rbac\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:10 crc kubenswrapper[4741]: I0226 08:28:10.995510 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/aad6cae3-3b9d-4d9e-8549-55da6e10901d-tls-secret\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105201 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b029b8c8-35eb-4509-a29a-9ada4434b899-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105258 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/aad6cae3-3b9d-4d9e-8549-55da6e10901d-rbac\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105296 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/aad6cae3-3b9d-4d9e-8549-55da6e10901d-tls-secret\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105329 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b029b8c8-35eb-4509-a29a-9ada4434b899-lokistack-gateway\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105363 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b029b8c8-35eb-4509-a29a-9ada4434b899-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105388 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b029b8c8-35eb-4509-a29a-9ada4434b899-rbac\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105407 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/aad6cae3-3b9d-4d9e-8549-55da6e10901d-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105437 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h6d7\" (UniqueName: \"kubernetes.io/projected/aad6cae3-3b9d-4d9e-8549-55da6e10901d-kube-api-access-8h6d7\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105458 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b029b8c8-35eb-4509-a29a-9ada4434b899-tls-secret\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105491 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b029b8c8-35eb-4509-a29a-9ada4434b899-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105510 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aad6cae3-3b9d-4d9e-8549-55da6e10901d-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: 
\"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105557 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/aad6cae3-3b9d-4d9e-8549-55da6e10901d-tenants\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105586 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aad6cae3-3b9d-4d9e-8549-55da6e10901d-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105609 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qgg8\" (UniqueName: \"kubernetes.io/projected/b029b8c8-35eb-4509-a29a-9ada4434b899-kube-api-access-9qgg8\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105644 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/aad6cae3-3b9d-4d9e-8549-55da6e10901d-lokistack-gateway\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.105670 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: 
\"kubernetes.io/secret/b029b8c8-35eb-4509-a29a-9ada4434b899-tenants\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.108780 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b029b8c8-35eb-4509-a29a-9ada4434b899-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.117156 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/aad6cae3-3b9d-4d9e-8549-55da6e10901d-tenants\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.118068 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b029b8c8-35eb-4509-a29a-9ada4434b899-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.119623 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/aad6cae3-3b9d-4d9e-8549-55da6e10901d-rbac\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.120174 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aad6cae3-3b9d-4d9e-8549-55da6e10901d-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.120579 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b029b8c8-35eb-4509-a29a-9ada4434b899-lokistack-gateway\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.122016 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aad6cae3-3b9d-4d9e-8549-55da6e10901d-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.122417 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b029b8c8-35eb-4509-a29a-9ada4434b899-rbac\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.125533 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/aad6cae3-3b9d-4d9e-8549-55da6e10901d-lokistack-gateway\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 
08:28:11.128535 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b029b8c8-35eb-4509-a29a-9ada4434b899-tenants\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.132806 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b029b8c8-35eb-4509-a29a-9ada4434b899-tls-secret\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.133319 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b029b8c8-35eb-4509-a29a-9ada4434b899-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.133768 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/aad6cae3-3b9d-4d9e-8549-55da6e10901d-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.143017 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/aad6cae3-3b9d-4d9e-8549-55da6e10901d-tls-secret\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " 
pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.144481 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h6d7\" (UniqueName: \"kubernetes.io/projected/aad6cae3-3b9d-4d9e-8549-55da6e10901d-kube-api-access-8h6d7\") pod \"logging-loki-gateway-7bbb966984-qjtwt\" (UID: \"aad6cae3-3b9d-4d9e-8549-55da6e10901d\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.145019 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qgg8\" (UniqueName: \"kubernetes.io/projected/b029b8c8-35eb-4509-a29a-9ada4434b899-kube-api-access-9qgg8\") pod \"logging-loki-gateway-7bbb966984-jqlhm\" (UID: \"b029b8c8-35eb-4509-a29a-9ada4434b899\") " pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.211984 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.273447 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.317455 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r"] Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.447270 4741 scope.go:117] "RemoveContainer" containerID="cdc25ff52f1c1423806a637bd99f80e8abefaa09fbd4f8f97663ef4b1a67d829" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.504275 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.505756 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.510979 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.511705 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.511990 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.527064 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6"] Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.562027 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d"] Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.580550 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.592398 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.596931 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.599602 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.625854 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e9394a8-a585-40dc-8178-539b51408421-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.625969 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ee69c683-d7dc-4728-9744-3ca8faba19ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee69c683-d7dc-4728-9744-3ca8faba19ba\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.626003 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/5e9394a8-a585-40dc-8178-539b51408421-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.626031 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e9394a8-a585-40dc-8178-539b51408421-config\") pod 
\"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.626072 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bdb35603-8302-4e1d-8ed3-c1881e0afb8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bdb35603-8302-4e1d-8ed3-c1881e0afb8c\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.626102 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5e9394a8-a585-40dc-8178-539b51408421-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.626145 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/5e9394a8-a585-40dc-8178-539b51408421-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.626194 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4c95\" (UniqueName: \"kubernetes.io/projected/5e9394a8-a585-40dc-8178-539b51408421-kube-api-access-z4c95\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.633267 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 
26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.671971 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.673523 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.679995 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.680268 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.684564 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.710299 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" event={"ID":"4896602d-060e-4777-957f-ff83ce8e812f","Type":"ContainerStarted","Data":"f144f3ba576629c67a66f668cea8a616767cd60d6d9a1e679fed07466ac10e92"} Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.711639 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" event={"ID":"41e6c349-1fc0-4972-a080-55bb785a4bf7","Type":"ContainerStarted","Data":"fdbaa55d2bdcb7c2e7b51166edf3c4b9d99b72180f0f515c4d703edb81ee9a14"} Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.712669 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" event={"ID":"091900f2-d6cc-4fbb-8b1b-f4216f868a9c","Type":"ContainerStarted","Data":"7a5d8439ea9f1131dccdffb142091c7ea2746fc2827f92c619abea14d0b74aab"} Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.728859 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88cf7afe-52dd-437f-9739-e4f112fef5e8-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.728979 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ee69c683-d7dc-4728-9744-3ca8faba19ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee69c683-d7dc-4728-9744-3ca8faba19ba\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729025 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/5e9394a8-a585-40dc-8178-539b51408421-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729049 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e9394a8-a585-40dc-8178-539b51408421-config\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729097 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fc93a735-0cf8-47eb-9bf2-a3246c8a254a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc93a735-0cf8-47eb-9bf2-a3246c8a254a\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 
08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729148 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1c77188b-c680-4368-aae1-fd7aa00f07c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c77188b-c680-4368-aae1-fd7aa00f07c4\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729179 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bdb35603-8302-4e1d-8ed3-c1881e0afb8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bdb35603-8302-4e1d-8ed3-c1881e0afb8c\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729217 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be2b1208-3e86-448e-beeb-86c6d953097d-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729249 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/5e9394a8-a585-40dc-8178-539b51408421-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729286 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: 
\"kubernetes.io/secret/88cf7afe-52dd-437f-9739-e4f112fef5e8-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729308 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/be2b1208-3e86-448e-beeb-86c6d953097d-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729338 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4c95\" (UniqueName: \"kubernetes.io/projected/5e9394a8-a585-40dc-8178-539b51408421-kube-api-access-z4c95\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729377 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2b1208-3e86-448e-beeb-86c6d953097d-config\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729402 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/be2b1208-3e86-448e-beeb-86c6d953097d-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729444 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xqb4\" (UniqueName: \"kubernetes.io/projected/88cf7afe-52dd-437f-9739-e4f112fef5e8-kube-api-access-7xqb4\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729484 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvckh\" (UniqueName: \"kubernetes.io/projected/be2b1208-3e86-448e-beeb-86c6d953097d-kube-api-access-hvckh\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729561 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5e9394a8-a585-40dc-8178-539b51408421-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729587 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/88cf7afe-52dd-437f-9739-e4f112fef5e8-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729631 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/be2b1208-3e86-448e-beeb-86c6d953097d-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" 
Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729655 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88cf7afe-52dd-437f-9739-e4f112fef5e8-config\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729690 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/88cf7afe-52dd-437f-9739-e4f112fef5e8-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.729710 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e9394a8-a585-40dc-8178-539b51408421-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.730968 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e9394a8-a585-40dc-8178-539b51408421-config\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.731917 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e9394a8-a585-40dc-8178-539b51408421-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 
crc kubenswrapper[4741]: I0226 08:28:11.735393 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.735445 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bdb35603-8302-4e1d-8ed3-c1881e0afb8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bdb35603-8302-4e1d-8ed3-c1881e0afb8c\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4385331b50cb13e4e1c996e03493d8cf868cd6e9ce43e76b065d165863f88202/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.735393 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.735537 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ee69c683-d7dc-4728-9744-3ca8faba19ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee69c683-d7dc-4728-9744-3ca8faba19ba\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/897c75670238b9d1ead72776f0b1ea12d5488251c11e513e26f780282839f475/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.736710 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/5e9394a8-a585-40dc-8178-539b51408421-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc 
kubenswrapper[4741]: I0226 08:28:11.742174 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/5e9394a8-a585-40dc-8178-539b51408421-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.742212 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5e9394a8-a585-40dc-8178-539b51408421-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.755240 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4c95\" (UniqueName: \"kubernetes.io/projected/5e9394a8-a585-40dc-8178-539b51408421-kube-api-access-z4c95\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.775982 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bdb35603-8302-4e1d-8ed3-c1881e0afb8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bdb35603-8302-4e1d-8ed3-c1881e0afb8c\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.787283 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ee69c683-d7dc-4728-9744-3ca8faba19ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee69c683-d7dc-4728-9744-3ca8faba19ba\") pod \"logging-loki-ingester-0\" (UID: \"5e9394a8-a585-40dc-8178-539b51408421\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 
08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832095 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/88cf7afe-52dd-437f-9739-e4f112fef5e8-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832208 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/be2b1208-3e86-448e-beeb-86c6d953097d-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832242 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88cf7afe-52dd-437f-9739-e4f112fef5e8-config\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832262 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/88cf7afe-52dd-437f-9739-e4f112fef5e8-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832293 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88cf7afe-52dd-437f-9739-e4f112fef5e8-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" 
Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832332 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fc93a735-0cf8-47eb-9bf2-a3246c8a254a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc93a735-0cf8-47eb-9bf2-a3246c8a254a\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832358 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1c77188b-c680-4368-aae1-fd7aa00f07c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c77188b-c680-4368-aae1-fd7aa00f07c4\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832382 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be2b1208-3e86-448e-beeb-86c6d953097d-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832409 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/88cf7afe-52dd-437f-9739-e4f112fef5e8-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832443 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/be2b1208-3e86-448e-beeb-86c6d953097d-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: 
\"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832473 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2b1208-3e86-448e-beeb-86c6d953097d-config\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832496 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/be2b1208-3e86-448e-beeb-86c6d953097d-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832523 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xqb4\" (UniqueName: \"kubernetes.io/projected/88cf7afe-52dd-437f-9739-e4f112fef5e8-kube-api-access-7xqb4\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.832550 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvckh\" (UniqueName: \"kubernetes.io/projected/be2b1208-3e86-448e-beeb-86c6d953097d-kube-api-access-hvckh\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.833860 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88cf7afe-52dd-437f-9739-e4f112fef5e8-config\") pod \"logging-loki-compactor-0\" (UID: 
\"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.835270 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88cf7afe-52dd-437f-9739-e4f112fef5e8-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.836041 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/88cf7afe-52dd-437f-9739-e4f112fef5e8-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.836695 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/88cf7afe-52dd-437f-9739-e4f112fef5e8-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.837371 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be2b1208-3e86-448e-beeb-86c6d953097d-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.837845 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2b1208-3e86-448e-beeb-86c6d953097d-config\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.839588 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.839622 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fc93a735-0cf8-47eb-9bf2-a3246c8a254a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc93a735-0cf8-47eb-9bf2-a3246c8a254a\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bb5769a94a16d7ca60033d1a0cd3415d06b6eb4d3cca65dd03a912625b79a67e/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.840228 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/be2b1208-3e86-448e-beeb-86c6d953097d-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.841080 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.841982 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1c77188b-c680-4368-aae1-fd7aa00f07c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c77188b-c680-4368-aae1-fd7aa00f07c4\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ca82ed053f6db6ee539ca878ea4b88a0ba015e7a838e24b1326bdf6b8e86f68/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.841710 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/be2b1208-3e86-448e-beeb-86c6d953097d-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.842286 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/88cf7afe-52dd-437f-9739-e4f112fef5e8-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.843977 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/be2b1208-3e86-448e-beeb-86c6d953097d-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.853049 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvckh\" (UniqueName: 
\"kubernetes.io/projected/be2b1208-3e86-448e-beeb-86c6d953097d-kube-api-access-hvckh\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.857377 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xqb4\" (UniqueName: \"kubernetes.io/projected/88cf7afe-52dd-437f-9739-e4f112fef5e8-kube-api-access-7xqb4\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.857984 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.869474 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1c77188b-c680-4368-aae1-fd7aa00f07c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c77188b-c680-4368-aae1-fd7aa00f07c4\") pod \"logging-loki-index-gateway-0\" (UID: \"be2b1208-3e86-448e-beeb-86c6d953097d\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.887520 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fc93a735-0cf8-47eb-9bf2-a3246c8a254a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc93a735-0cf8-47eb-9bf2-a3246c8a254a\") pod \"logging-loki-compactor-0\" (UID: \"88cf7afe-52dd-437f-9739-e4f112fef5e8\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.895461 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7bbb966984-qjtwt"] Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.946353 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.980583 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7bbb966984-jqlhm"] Feb 26 08:28:11 crc kubenswrapper[4741]: I0226 08:28:11.992959 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:12 crc kubenswrapper[4741]: I0226 08:28:12.486657 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 26 08:28:12 crc kubenswrapper[4741]: I0226 08:28:12.610223 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 26 08:28:12 crc kubenswrapper[4741]: I0226 08:28:12.625613 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 26 08:28:12 crc kubenswrapper[4741]: W0226 08:28:12.628257 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe2b1208_3e86_448e_beeb_86c6d953097d.slice/crio-3318f5492eba70384569e18ae40d01bca9309557e7528c927540931a175ba5fd WatchSource:0}: Error finding container 3318f5492eba70384569e18ae40d01bca9309557e7528c927540931a175ba5fd: Status 404 returned error can't find the container with id 3318f5492eba70384569e18ae40d01bca9309557e7528c927540931a175ba5fd Feb 26 08:28:12 crc kubenswrapper[4741]: I0226 08:28:12.721240 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"5e9394a8-a585-40dc-8178-539b51408421","Type":"ContainerStarted","Data":"f9e798242200510abb73f7df2e56a0647b852504315232861b40e5b9ffdc4be1"} Feb 26 08:28:12 crc kubenswrapper[4741]: I0226 08:28:12.722472 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" 
event={"ID":"b029b8c8-35eb-4509-a29a-9ada4434b899","Type":"ContainerStarted","Data":"50d900e13b26aee02c5f66476e840ef2e611c4d5d145780e83be9eed2bd9c498"} Feb 26 08:28:12 crc kubenswrapper[4741]: I0226 08:28:12.724307 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"88cf7afe-52dd-437f-9739-e4f112fef5e8","Type":"ContainerStarted","Data":"3a9516f47fe7a2505e755488fc75fbeb0b8b21c24344b4ee0aa4ff21faaff5c5"} Feb 26 08:28:12 crc kubenswrapper[4741]: I0226 08:28:12.725554 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" event={"ID":"aad6cae3-3b9d-4d9e-8549-55da6e10901d","Type":"ContainerStarted","Data":"242fd7354ca572b7da1f0faabfc764d9a0599652523dfa1d36180b1c3475e938"} Feb 26 08:28:12 crc kubenswrapper[4741]: I0226 08:28:12.726910 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"be2b1208-3e86-448e-beeb-86c6d953097d","Type":"ContainerStarted","Data":"3318f5492eba70384569e18ae40d01bca9309557e7528c927540931a175ba5fd"} Feb 26 08:28:20 crc kubenswrapper[4741]: I0226 08:28:20.819821 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" event={"ID":"4896602d-060e-4777-957f-ff83ce8e812f","Type":"ContainerStarted","Data":"684ab4b8d4419dd7823b3b5c57d1406e9aece1733b41cfdccf58ac07245b1410"} Feb 26 08:28:20 crc kubenswrapper[4741]: I0226 08:28:20.820599 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:20 crc kubenswrapper[4741]: I0226 08:28:20.821508 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"5e9394a8-a585-40dc-8178-539b51408421","Type":"ContainerStarted","Data":"ca26ebb47a814b172d8ddfdb902c7196a060eebd533fa1fbed31fa22224aff24"} Feb 26 08:28:20 
crc kubenswrapper[4741]: I0226 08:28:20.821604 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:28:20 crc kubenswrapper[4741]: I0226 08:28:20.822804 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"88cf7afe-52dd-437f-9739-e4f112fef5e8","Type":"ContainerStarted","Data":"af225e65fc198d441426bdfd5d2d3b936b115f6b74dbef2c1c0861c94e62d328"} Feb 26 08:28:20 crc kubenswrapper[4741]: I0226 08:28:20.822916 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:20 crc kubenswrapper[4741]: I0226 08:28:20.824767 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" event={"ID":"091900f2-d6cc-4fbb-8b1b-f4216f868a9c","Type":"ContainerStarted","Data":"a6c3820bb2e59cee67094f74ab180198fd0ca24a84e18ee7ca6d95e3bba27e58"} Feb 26 08:28:20 crc kubenswrapper[4741]: I0226 08:28:20.825190 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:20 crc kubenswrapper[4741]: I0226 08:28:20.843349 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" podStartSLOduration=2.6088649090000002 podStartE2EDuration="10.843330643s" podCreationTimestamp="2026-02-26 08:28:10 +0000 UTC" firstStartedPulling="2026-02-26 08:28:11.614332278 +0000 UTC m=+926.610269665" lastFinishedPulling="2026-02-26 08:28:19.848798012 +0000 UTC m=+934.844735399" observedRunningTime="2026-02-26 08:28:20.839900705 +0000 UTC m=+935.835838122" watchObservedRunningTime="2026-02-26 08:28:20.843330643 +0000 UTC m=+935.839268030" Feb 26 08:28:20 crc kubenswrapper[4741]: I0226 08:28:20.861421 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.411911446 podStartE2EDuration="10.861387895s" podCreationTimestamp="2026-02-26 08:28:10 +0000 UTC" firstStartedPulling="2026-02-26 08:28:12.494943956 +0000 UTC m=+927.490881343" lastFinishedPulling="2026-02-26 08:28:19.944420405 +0000 UTC m=+934.940357792" observedRunningTime="2026-02-26 08:28:20.857293089 +0000 UTC m=+935.853230476" watchObservedRunningTime="2026-02-26 08:28:20.861387895 +0000 UTC m=+935.857325282" Feb 26 08:28:20 crc kubenswrapper[4741]: I0226 08:28:20.876481 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" podStartSLOduration=2.570118139 podStartE2EDuration="10.876456133s" podCreationTimestamp="2026-02-26 08:28:10 +0000 UTC" firstStartedPulling="2026-02-26 08:28:11.342826073 +0000 UTC m=+926.338763460" lastFinishedPulling="2026-02-26 08:28:19.649164057 +0000 UTC m=+934.645101454" observedRunningTime="2026-02-26 08:28:20.874672982 +0000 UTC m=+935.870610389" watchObservedRunningTime="2026-02-26 08:28:20.876456133 +0000 UTC m=+935.872393520" Feb 26 08:28:20 crc kubenswrapper[4741]: I0226 08:28:20.905163 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.547187396 podStartE2EDuration="10.905140537s" podCreationTimestamp="2026-02-26 08:28:10 +0000 UTC" firstStartedPulling="2026-02-26 08:28:12.619034108 +0000 UTC m=+927.614971495" lastFinishedPulling="2026-02-26 08:28:19.976987249 +0000 UTC m=+934.972924636" observedRunningTime="2026-02-26 08:28:20.90068441 +0000 UTC m=+935.896621797" watchObservedRunningTime="2026-02-26 08:28:20.905140537 +0000 UTC m=+935.901078034" Feb 26 08:28:21 crc kubenswrapper[4741]: I0226 08:28:21.834791 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" 
event={"ID":"41e6c349-1fc0-4972-a080-55bb785a4bf7","Type":"ContainerStarted","Data":"0dc7d7199f6389216cd83a1822c52cfe2a77bb6bee5c0409a4ea6bc3a2f01d54"} Feb 26 08:28:22 crc kubenswrapper[4741]: I0226 08:28:22.850244 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"be2b1208-3e86-448e-beeb-86c6d953097d","Type":"ContainerStarted","Data":"21bdf234cdd4db91359fe671003b14f744b6d59371f01f3ed05cdc4bc71ba99c"} Feb 26 08:28:22 crc kubenswrapper[4741]: I0226 08:28:22.851444 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:22 crc kubenswrapper[4741]: I0226 08:28:22.851621 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:22 crc kubenswrapper[4741]: I0226 08:28:22.884324 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" podStartSLOduration=3.220544916 podStartE2EDuration="12.884292808s" podCreationTimestamp="2026-02-26 08:28:10 +0000 UTC" firstStartedPulling="2026-02-26 08:28:11.614198244 +0000 UTC m=+926.610135631" lastFinishedPulling="2026-02-26 08:28:21.277946106 +0000 UTC m=+936.273883523" observedRunningTime="2026-02-26 08:28:22.869846178 +0000 UTC m=+937.865783565" watchObservedRunningTime="2026-02-26 08:28:22.884292808 +0000 UTC m=+937.880230195" Feb 26 08:28:22 crc kubenswrapper[4741]: I0226 08:28:22.909854 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=4.289630174 podStartE2EDuration="12.909831103s" podCreationTimestamp="2026-02-26 08:28:10 +0000 UTC" firstStartedPulling="2026-02-26 08:28:12.631718168 +0000 UTC m=+927.627655555" lastFinishedPulling="2026-02-26 08:28:21.251919067 +0000 UTC m=+936.247856484" observedRunningTime="2026-02-26 
08:28:22.89808489 +0000 UTC m=+937.894022297" watchObservedRunningTime="2026-02-26 08:28:22.909831103 +0000 UTC m=+937.905768500" Feb 26 08:28:23 crc kubenswrapper[4741]: I0226 08:28:23.857221 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" event={"ID":"b029b8c8-35eb-4509-a29a-9ada4434b899","Type":"ContainerStarted","Data":"e95586be755ee5b70bed29cd2e23113f0e34e9eede5bb569c203dc066a3d0d7a"} Feb 26 08:28:23 crc kubenswrapper[4741]: I0226 08:28:23.860742 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" event={"ID":"aad6cae3-3b9d-4d9e-8549-55da6e10901d","Type":"ContainerStarted","Data":"6ab5fe4ffaeaea8f9ed18b53bf583db8c7de2e71f70c17acfe0b3123542020a7"} Feb 26 08:28:29 crc kubenswrapper[4741]: I0226 08:28:29.907461 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" event={"ID":"b029b8c8-35eb-4509-a29a-9ada4434b899","Type":"ContainerStarted","Data":"b2e579aa6ca1408f98a8b84cc55e27cb1dfc9bd0fc4a3c31e04d4f3088113136"} Feb 26 08:28:29 crc kubenswrapper[4741]: I0226 08:28:29.907864 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:29 crc kubenswrapper[4741]: I0226 08:28:29.907882 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:29 crc kubenswrapper[4741]: I0226 08:28:29.911613 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" event={"ID":"aad6cae3-3b9d-4d9e-8549-55da6e10901d","Type":"ContainerStarted","Data":"8c60820f09477bba7115ab3a3d0a36a99ecdca6d47c285240d497073bf7b29f9"} Feb 26 08:28:29 crc kubenswrapper[4741]: I0226 08:28:29.942018 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:29 crc kubenswrapper[4741]: I0226 08:28:29.942950 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" podStartSLOduration=2.785951934 podStartE2EDuration="19.942922916s" podCreationTimestamp="2026-02-26 08:28:10 +0000 UTC" firstStartedPulling="2026-02-26 08:28:12.029683184 +0000 UTC m=+927.025620571" lastFinishedPulling="2026-02-26 08:28:29.186654166 +0000 UTC m=+944.182591553" observedRunningTime="2026-02-26 08:28:29.937142592 +0000 UTC m=+944.933080019" watchObservedRunningTime="2026-02-26 08:28:29.942922916 +0000 UTC m=+944.938860303" Feb 26 08:28:29 crc kubenswrapper[4741]: I0226 08:28:29.959507 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" Feb 26 08:28:29 crc kubenswrapper[4741]: I0226 08:28:29.973798 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" podStartSLOduration=2.71673989 podStartE2EDuration="19.973772502s" podCreationTimestamp="2026-02-26 08:28:10 +0000 UTC" firstStartedPulling="2026-02-26 08:28:11.938953569 +0000 UTC m=+926.934890956" lastFinishedPulling="2026-02-26 08:28:29.195986171 +0000 UTC m=+944.191923568" observedRunningTime="2026-02-26 08:28:29.96420819 +0000 UTC m=+944.960145597" watchObservedRunningTime="2026-02-26 08:28:29.973772502 +0000 UTC m=+944.969709889" Feb 26 08:28:30 crc kubenswrapper[4741]: I0226 08:28:30.921543 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:30 crc kubenswrapper[4741]: I0226 08:28:30.922092 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:30 crc kubenswrapper[4741]: I0226 08:28:30.932253 4741 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:30 crc kubenswrapper[4741]: I0226 08:28:30.932821 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" Feb 26 08:28:40 crc kubenswrapper[4741]: I0226 08:28:40.668342 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" Feb 26 08:28:40 crc kubenswrapper[4741]: I0226 08:28:40.870625 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" Feb 26 08:28:40 crc kubenswrapper[4741]: I0226 08:28:40.918964 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" Feb 26 08:28:41 crc kubenswrapper[4741]: I0226 08:28:41.965586 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 26 08:28:41 crc kubenswrapper[4741]: I0226 08:28:41.967441 4741 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 26 08:28:41 crc kubenswrapper[4741]: I0226 08:28:41.967574 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5e9394a8-a585-40dc-8178-539b51408421" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 08:28:42 crc kubenswrapper[4741]: I0226 08:28:42.004267 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 08:28:51 crc kubenswrapper[4741]: I0226 
08:28:51.864707 4741 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 26 08:28:51 crc kubenswrapper[4741]: I0226 08:28:51.865642 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5e9394a8-a585-40dc-8178-539b51408421" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 08:29:01 crc kubenswrapper[4741]: I0226 08:29:01.866416 4741 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 26 08:29:01 crc kubenswrapper[4741]: I0226 08:29:01.866898 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5e9394a8-a585-40dc-8178-539b51408421" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 08:29:11 crc kubenswrapper[4741]: I0226 08:29:11.866261 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 26 08:29:13 crc kubenswrapper[4741]: I0226 08:29:13.218894 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bpbj9"] Feb 26 08:29:13 crc kubenswrapper[4741]: I0226 08:29:13.221439 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:13 crc kubenswrapper[4741]: I0226 08:29:13.240075 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpbj9"] Feb 26 08:29:13 crc kubenswrapper[4741]: I0226 08:29:13.307506 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad4a5b4-13e8-49ad-be3d-396ffc43a531-catalog-content\") pod \"community-operators-bpbj9\" (UID: \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\") " pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:13 crc kubenswrapper[4741]: I0226 08:29:13.307594 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgd9n\" (UniqueName: \"kubernetes.io/projected/bad4a5b4-13e8-49ad-be3d-396ffc43a531-kube-api-access-rgd9n\") pod \"community-operators-bpbj9\" (UID: \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\") " pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:13 crc kubenswrapper[4741]: I0226 08:29:13.307649 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad4a5b4-13e8-49ad-be3d-396ffc43a531-utilities\") pod \"community-operators-bpbj9\" (UID: \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\") " pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:13 crc kubenswrapper[4741]: I0226 08:29:13.410360 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad4a5b4-13e8-49ad-be3d-396ffc43a531-catalog-content\") pod \"community-operators-bpbj9\" (UID: \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\") " pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:13 crc kubenswrapper[4741]: I0226 08:29:13.410432 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rgd9n\" (UniqueName: \"kubernetes.io/projected/bad4a5b4-13e8-49ad-be3d-396ffc43a531-kube-api-access-rgd9n\") pod \"community-operators-bpbj9\" (UID: \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\") " pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:13 crc kubenswrapper[4741]: I0226 08:29:13.410460 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad4a5b4-13e8-49ad-be3d-396ffc43a531-utilities\") pod \"community-operators-bpbj9\" (UID: \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\") " pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:13 crc kubenswrapper[4741]: I0226 08:29:13.411041 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad4a5b4-13e8-49ad-be3d-396ffc43a531-utilities\") pod \"community-operators-bpbj9\" (UID: \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\") " pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:13 crc kubenswrapper[4741]: I0226 08:29:13.411363 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad4a5b4-13e8-49ad-be3d-396ffc43a531-catalog-content\") pod \"community-operators-bpbj9\" (UID: \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\") " pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:13 crc kubenswrapper[4741]: I0226 08:29:13.435263 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgd9n\" (UniqueName: \"kubernetes.io/projected/bad4a5b4-13e8-49ad-be3d-396ffc43a531-kube-api-access-rgd9n\") pod \"community-operators-bpbj9\" (UID: \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\") " pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:13 crc kubenswrapper[4741]: I0226 08:29:13.565083 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:14 crc kubenswrapper[4741]: I0226 08:29:14.136096 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpbj9"] Feb 26 08:29:14 crc kubenswrapper[4741]: I0226 08:29:14.543399 4741 generic.go:334] "Generic (PLEG): container finished" podID="bad4a5b4-13e8-49ad-be3d-396ffc43a531" containerID="94070b3ca4e93d8193db8c4e7c1b2818a8e15884ff083e86937be960872d46b8" exitCode=0 Feb 26 08:29:14 crc kubenswrapper[4741]: I0226 08:29:14.543495 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpbj9" event={"ID":"bad4a5b4-13e8-49ad-be3d-396ffc43a531","Type":"ContainerDied","Data":"94070b3ca4e93d8193db8c4e7c1b2818a8e15884ff083e86937be960872d46b8"} Feb 26 08:29:14 crc kubenswrapper[4741]: I0226 08:29:14.543934 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpbj9" event={"ID":"bad4a5b4-13e8-49ad-be3d-396ffc43a531","Type":"ContainerStarted","Data":"faca86586b860fa936d9b29912f17e82dbb1c70d0cb1b53daa0fa349907a2065"} Feb 26 08:29:15 crc kubenswrapper[4741]: I0226 08:29:15.578943 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpbj9" event={"ID":"bad4a5b4-13e8-49ad-be3d-396ffc43a531","Type":"ContainerStarted","Data":"e596bbbaa9dacb729a6177152e31de26a5ae5c2a7fda2a2c456e4c1012995402"} Feb 26 08:29:16 crc kubenswrapper[4741]: I0226 08:29:16.587917 4741 generic.go:334] "Generic (PLEG): container finished" podID="bad4a5b4-13e8-49ad-be3d-396ffc43a531" containerID="e596bbbaa9dacb729a6177152e31de26a5ae5c2a7fda2a2c456e4c1012995402" exitCode=0 Feb 26 08:29:16 crc kubenswrapper[4741]: I0226 08:29:16.588001 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpbj9" 
event={"ID":"bad4a5b4-13e8-49ad-be3d-396ffc43a531","Type":"ContainerDied","Data":"e596bbbaa9dacb729a6177152e31de26a5ae5c2a7fda2a2c456e4c1012995402"} Feb 26 08:29:17 crc kubenswrapper[4741]: I0226 08:29:17.599262 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpbj9" event={"ID":"bad4a5b4-13e8-49ad-be3d-396ffc43a531","Type":"ContainerStarted","Data":"7223ac560f07056aae472deb0c44483f140d2d0155b25251f42a851d6fcc84b9"} Feb 26 08:29:17 crc kubenswrapper[4741]: I0226 08:29:17.626208 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bpbj9" podStartSLOduration=1.875618111 podStartE2EDuration="4.626095318s" podCreationTimestamp="2026-02-26 08:29:13 +0000 UTC" firstStartedPulling="2026-02-26 08:29:14.545795841 +0000 UTC m=+989.541733258" lastFinishedPulling="2026-02-26 08:29:17.296273078 +0000 UTC m=+992.292210465" observedRunningTime="2026-02-26 08:29:17.622057883 +0000 UTC m=+992.617995280" watchObservedRunningTime="2026-02-26 08:29:17.626095318 +0000 UTC m=+992.622032705" Feb 26 08:29:23 crc kubenswrapper[4741]: I0226 08:29:23.565843 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:23 crc kubenswrapper[4741]: I0226 08:29:23.566846 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:23 crc kubenswrapper[4741]: I0226 08:29:23.631889 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:23 crc kubenswrapper[4741]: I0226 08:29:23.708028 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:23 crc kubenswrapper[4741]: I0226 08:29:23.871370 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-bpbj9"] Feb 26 08:29:25 crc kubenswrapper[4741]: I0226 08:29:25.148915 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:29:25 crc kubenswrapper[4741]: I0226 08:29:25.149433 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:29:25 crc kubenswrapper[4741]: I0226 08:29:25.670383 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bpbj9" podUID="bad4a5b4-13e8-49ad-be3d-396ffc43a531" containerName="registry-server" containerID="cri-o://7223ac560f07056aae472deb0c44483f140d2d0155b25251f42a851d6fcc84b9" gracePeriod=2 Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.088708 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.174047 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad4a5b4-13e8-49ad-be3d-396ffc43a531-utilities\") pod \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\" (UID: \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\") " Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.174158 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgd9n\" (UniqueName: \"kubernetes.io/projected/bad4a5b4-13e8-49ad-be3d-396ffc43a531-kube-api-access-rgd9n\") pod \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\" (UID: \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\") " Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.174378 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad4a5b4-13e8-49ad-be3d-396ffc43a531-catalog-content\") pod \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\" (UID: \"bad4a5b4-13e8-49ad-be3d-396ffc43a531\") " Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.175440 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bad4a5b4-13e8-49ad-be3d-396ffc43a531-utilities" (OuterVolumeSpecName: "utilities") pod "bad4a5b4-13e8-49ad-be3d-396ffc43a531" (UID: "bad4a5b4-13e8-49ad-be3d-396ffc43a531"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.181898 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad4a5b4-13e8-49ad-be3d-396ffc43a531-kube-api-access-rgd9n" (OuterVolumeSpecName: "kube-api-access-rgd9n") pod "bad4a5b4-13e8-49ad-be3d-396ffc43a531" (UID: "bad4a5b4-13e8-49ad-be3d-396ffc43a531"). InnerVolumeSpecName "kube-api-access-rgd9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.263935 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bad4a5b4-13e8-49ad-be3d-396ffc43a531-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bad4a5b4-13e8-49ad-be3d-396ffc43a531" (UID: "bad4a5b4-13e8-49ad-be3d-396ffc43a531"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.275403 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad4a5b4-13e8-49ad-be3d-396ffc43a531-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.275431 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad4a5b4-13e8-49ad-be3d-396ffc43a531-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.275444 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgd9n\" (UniqueName: \"kubernetes.io/projected/bad4a5b4-13e8-49ad-be3d-396ffc43a531-kube-api-access-rgd9n\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.682247 4741 generic.go:334] "Generic (PLEG): container finished" podID="bad4a5b4-13e8-49ad-be3d-396ffc43a531" containerID="7223ac560f07056aae472deb0c44483f140d2d0155b25251f42a851d6fcc84b9" exitCode=0 Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.682327 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpbj9" event={"ID":"bad4a5b4-13e8-49ad-be3d-396ffc43a531","Type":"ContainerDied","Data":"7223ac560f07056aae472deb0c44483f140d2d0155b25251f42a851d6fcc84b9"} Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.682838 4741 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-bpbj9" event={"ID":"bad4a5b4-13e8-49ad-be3d-396ffc43a531","Type":"ContainerDied","Data":"faca86586b860fa936d9b29912f17e82dbb1c70d0cb1b53daa0fa349907a2065"} Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.682436 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpbj9" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.682877 4741 scope.go:117] "RemoveContainer" containerID="7223ac560f07056aae472deb0c44483f140d2d0155b25251f42a851d6fcc84b9" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.728801 4741 scope.go:117] "RemoveContainer" containerID="e596bbbaa9dacb729a6177152e31de26a5ae5c2a7fda2a2c456e4c1012995402" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.737477 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpbj9"] Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.745741 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bpbj9"] Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.754638 4741 scope.go:117] "RemoveContainer" containerID="94070b3ca4e93d8193db8c4e7c1b2818a8e15884ff083e86937be960872d46b8" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.780949 4741 scope.go:117] "RemoveContainer" containerID="7223ac560f07056aae472deb0c44483f140d2d0155b25251f42a851d6fcc84b9" Feb 26 08:29:26 crc kubenswrapper[4741]: E0226 08:29:26.782954 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7223ac560f07056aae472deb0c44483f140d2d0155b25251f42a851d6fcc84b9\": container with ID starting with 7223ac560f07056aae472deb0c44483f140d2d0155b25251f42a851d6fcc84b9 not found: ID does not exist" containerID="7223ac560f07056aae472deb0c44483f140d2d0155b25251f42a851d6fcc84b9" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 
08:29:26.783021 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7223ac560f07056aae472deb0c44483f140d2d0155b25251f42a851d6fcc84b9"} err="failed to get container status \"7223ac560f07056aae472deb0c44483f140d2d0155b25251f42a851d6fcc84b9\": rpc error: code = NotFound desc = could not find container \"7223ac560f07056aae472deb0c44483f140d2d0155b25251f42a851d6fcc84b9\": container with ID starting with 7223ac560f07056aae472deb0c44483f140d2d0155b25251f42a851d6fcc84b9 not found: ID does not exist" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.783069 4741 scope.go:117] "RemoveContainer" containerID="e596bbbaa9dacb729a6177152e31de26a5ae5c2a7fda2a2c456e4c1012995402" Feb 26 08:29:26 crc kubenswrapper[4741]: E0226 08:29:26.783704 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e596bbbaa9dacb729a6177152e31de26a5ae5c2a7fda2a2c456e4c1012995402\": container with ID starting with e596bbbaa9dacb729a6177152e31de26a5ae5c2a7fda2a2c456e4c1012995402 not found: ID does not exist" containerID="e596bbbaa9dacb729a6177152e31de26a5ae5c2a7fda2a2c456e4c1012995402" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.783764 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e596bbbaa9dacb729a6177152e31de26a5ae5c2a7fda2a2c456e4c1012995402"} err="failed to get container status \"e596bbbaa9dacb729a6177152e31de26a5ae5c2a7fda2a2c456e4c1012995402\": rpc error: code = NotFound desc = could not find container \"e596bbbaa9dacb729a6177152e31de26a5ae5c2a7fda2a2c456e4c1012995402\": container with ID starting with e596bbbaa9dacb729a6177152e31de26a5ae5c2a7fda2a2c456e4c1012995402 not found: ID does not exist" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.783803 4741 scope.go:117] "RemoveContainer" containerID="94070b3ca4e93d8193db8c4e7c1b2818a8e15884ff083e86937be960872d46b8" Feb 26 08:29:26 crc 
kubenswrapper[4741]: E0226 08:29:26.784901 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94070b3ca4e93d8193db8c4e7c1b2818a8e15884ff083e86937be960872d46b8\": container with ID starting with 94070b3ca4e93d8193db8c4e7c1b2818a8e15884ff083e86937be960872d46b8 not found: ID does not exist" containerID="94070b3ca4e93d8193db8c4e7c1b2818a8e15884ff083e86937be960872d46b8" Feb 26 08:29:26 crc kubenswrapper[4741]: I0226 08:29:26.784965 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94070b3ca4e93d8193db8c4e7c1b2818a8e15884ff083e86937be960872d46b8"} err="failed to get container status \"94070b3ca4e93d8193db8c4e7c1b2818a8e15884ff083e86937be960872d46b8\": rpc error: code = NotFound desc = could not find container \"94070b3ca4e93d8193db8c4e7c1b2818a8e15884ff083e86937be960872d46b8\": container with ID starting with 94070b3ca4e93d8193db8c4e7c1b2818a8e15884ff083e86937be960872d46b8 not found: ID does not exist" Feb 26 08:29:27 crc kubenswrapper[4741]: I0226 08:29:27.797072 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bad4a5b4-13e8-49ad-be3d-396ffc43a531" path="/var/lib/kubelet/pods/bad4a5b4-13e8-49ad-be3d-396ffc43a531/volumes" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.276389 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-tpf9l"] Feb 26 08:29:30 crc kubenswrapper[4741]: E0226 08:29:30.278199 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad4a5b4-13e8-49ad-be3d-396ffc43a531" containerName="registry-server" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.278235 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad4a5b4-13e8-49ad-be3d-396ffc43a531" containerName="registry-server" Feb 26 08:29:30 crc kubenswrapper[4741]: E0226 08:29:30.278266 4741 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bad4a5b4-13e8-49ad-be3d-396ffc43a531" containerName="extract-utilities" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.278279 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad4a5b4-13e8-49ad-be3d-396ffc43a531" containerName="extract-utilities" Feb 26 08:29:30 crc kubenswrapper[4741]: E0226 08:29:30.278312 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad4a5b4-13e8-49ad-be3d-396ffc43a531" containerName="extract-content" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.278325 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad4a5b4-13e8-49ad-be3d-396ffc43a531" containerName="extract-content" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.278598 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad4a5b4-13e8-49ad-be3d-396ffc43a531" containerName="registry-server" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.279619 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.284022 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.284415 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.284991 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.285989 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.286393 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-c8dqf" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.308739 4741 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.323445 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-tpf9l"] Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.376178 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-tpf9l"] Feb 26 08:29:30 crc kubenswrapper[4741]: E0226 08:29:30.377171 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-4x6gf metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-tpf9l" podUID="cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.377850 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-tmp\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.377942 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-collector-syslog-receiver\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.378002 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-metrics\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " 
pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.378442 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-datadir\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.378539 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-config-openshift-service-cacrt\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.378674 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-sa-token\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.378785 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-entrypoint\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.378833 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-collector-token\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 
08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.378875 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-trusted-ca\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.378941 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-config\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.379044 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x6gf\" (UniqueName: \"kubernetes.io/projected/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-kube-api-access-4x6gf\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.480510 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-sa-token\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.480591 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-entrypoint\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.480618 4741 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-collector-token\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.480646 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-trusted-ca\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.480682 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-config\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.480717 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x6gf\" (UniqueName: \"kubernetes.io/projected/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-kube-api-access-4x6gf\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.480752 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-tmp\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.480781 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-collector-syslog-receiver\") pod \"collector-tpf9l\" 
(UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.480804 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-metrics\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.480873 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-datadir\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.480899 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-config-openshift-service-cacrt\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: E0226 08:29:30.481916 4741 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Feb 26 08:29:30 crc kubenswrapper[4741]: E0226 08:29:30.482004 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-metrics podName:cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f nodeName:}" failed. No retries permitted until 2026-02-26 08:29:30.981982518 +0000 UTC m=+1005.977919895 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-metrics") pod "collector-tpf9l" (UID: "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f") : secret "collector-metrics" not found Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.481992 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-datadir\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.483092 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-entrypoint\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.483464 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-config\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.483598 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-trusted-ca\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.483743 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-config-openshift-service-cacrt\") pod \"collector-tpf9l\" (UID: 
\"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.490844 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-collector-token\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.492424 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-tmp\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.500076 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-collector-syslog-receiver\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.501792 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-sa-token\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.502038 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x6gf\" (UniqueName: \"kubernetes.io/projected/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-kube-api-access-4x6gf\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.720169 4741 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.735842 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.786082 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-config-openshift-service-cacrt\") pod \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.786177 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-sa-token\") pod \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.786213 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-collector-token\") pod \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.786277 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-datadir\") pod \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.786572 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-datadir" (OuterVolumeSpecName: "datadir") pod "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f" (UID: 
"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.786726 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f" (UID: "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.786873 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-config\") pod \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.787045 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x6gf\" (UniqueName: \"kubernetes.io/projected/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-kube-api-access-4x6gf\") pod \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.787104 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-entrypoint\") pod \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.787172 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-config" (OuterVolumeSpecName: "config") pod "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f" (UID: 
"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.787188 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-tmp\") pod \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.787215 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-trusted-ca\") pod \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.787254 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-collector-syslog-receiver\") pod \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.787653 4741 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.787673 4741 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-datadir\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.787683 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:30 
crc kubenswrapper[4741]: I0226 08:29:30.789021 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f" (UID: "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.791321 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-sa-token" (OuterVolumeSpecName: "sa-token") pod "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f" (UID: "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.791317 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-collector-token" (OuterVolumeSpecName: "collector-token") pod "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f" (UID: "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.791355 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f" (UID: "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.792337 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-tmp" (OuterVolumeSpecName: "tmp") pod "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f" (UID: "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.796290 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-kube-api-access-4x6gf" (OuterVolumeSpecName: "kube-api-access-4x6gf") pod "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f" (UID: "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f"). InnerVolumeSpecName "kube-api-access-4x6gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.796691 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f" (UID: "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f"). InnerVolumeSpecName "collector-syslog-receiver". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.889435 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x6gf\" (UniqueName: \"kubernetes.io/projected/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-kube-api-access-4x6gf\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.889502 4741 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.889516 4741 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-tmp\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.889530 4741 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.889545 4741 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.889558 4741 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.889575 4741 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-collector-token\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.990901 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-metrics\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.993358 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wkgt4"] Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.995094 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:30 crc kubenswrapper[4741]: I0226 08:29:30.995848 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-metrics\") pod \"collector-tpf9l\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " pod="openshift-logging/collector-tpf9l" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.014468 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wkgt4"] Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.093035 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-metrics\") pod \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\" (UID: \"cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f\") " Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.093538 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc07f1dd-268f-41e1-8130-775a66433816-utilities\") pod \"certified-operators-wkgt4\" (UID: \"cc07f1dd-268f-41e1-8130-775a66433816\") " pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.093611 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6rh9\" (UniqueName: \"kubernetes.io/projected/cc07f1dd-268f-41e1-8130-775a66433816-kube-api-access-p6rh9\") pod \"certified-operators-wkgt4\" (UID: \"cc07f1dd-268f-41e1-8130-775a66433816\") " pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.093773 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc07f1dd-268f-41e1-8130-775a66433816-catalog-content\") pod \"certified-operators-wkgt4\" (UID: \"cc07f1dd-268f-41e1-8130-775a66433816\") " pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.096713 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-metrics" (OuterVolumeSpecName: "metrics") pod "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f" (UID: "cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.195939 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6rh9\" (UniqueName: \"kubernetes.io/projected/cc07f1dd-268f-41e1-8130-775a66433816-kube-api-access-p6rh9\") pod \"certified-operators-wkgt4\" (UID: \"cc07f1dd-268f-41e1-8130-775a66433816\") " pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.196003 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc07f1dd-268f-41e1-8130-775a66433816-catalog-content\") pod \"certified-operators-wkgt4\" (UID: \"cc07f1dd-268f-41e1-8130-775a66433816\") " pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.196144 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc07f1dd-268f-41e1-8130-775a66433816-utilities\") pod \"certified-operators-wkgt4\" (UID: \"cc07f1dd-268f-41e1-8130-775a66433816\") " pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.196199 4741 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.196883 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc07f1dd-268f-41e1-8130-775a66433816-catalog-content\") pod \"certified-operators-wkgt4\" (UID: \"cc07f1dd-268f-41e1-8130-775a66433816\") " pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.196894 4741 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc07f1dd-268f-41e1-8130-775a66433816-utilities\") pod \"certified-operators-wkgt4\" (UID: \"cc07f1dd-268f-41e1-8130-775a66433816\") " pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.213068 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6rh9\" (UniqueName: \"kubernetes.io/projected/cc07f1dd-268f-41e1-8130-775a66433816-kube-api-access-p6rh9\") pod \"certified-operators-wkgt4\" (UID: \"cc07f1dd-268f-41e1-8130-775a66433816\") " pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.375619 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.726778 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-tpf9l" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.782515 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-tpf9l"] Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.799908 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-tpf9l"] Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.811593 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-xvznb"] Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.817957 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-xvznb" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.826001 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-xvznb"] Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.837945 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-c8dqf" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.842310 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.842523 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.842660 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.842777 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.842990 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.912978 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wkgt4"] Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.913095 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8ea592fd-176d-496d-a1f2-67c6c3215be1-datadir\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.913596 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" 
(UniqueName: \"kubernetes.io/secret/8ea592fd-176d-496d-a1f2-67c6c3215be1-metrics\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.913689 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ea592fd-176d-496d-a1f2-67c6c3215be1-config\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.913733 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8ea592fd-176d-496d-a1f2-67c6c3215be1-collector-syslog-receiver\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.914023 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8ea592fd-176d-496d-a1f2-67c6c3215be1-config-openshift-service-cacrt\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.914054 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8ea592fd-176d-496d-a1f2-67c6c3215be1-entrypoint\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.914075 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbz2r\" (UniqueName: 
\"kubernetes.io/projected/8ea592fd-176d-496d-a1f2-67c6c3215be1-kube-api-access-sbz2r\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.914137 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8ea592fd-176d-496d-a1f2-67c6c3215be1-collector-token\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.914309 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8ea592fd-176d-496d-a1f2-67c6c3215be1-sa-token\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.914450 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ea592fd-176d-496d-a1f2-67c6c3215be1-trusted-ca\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:31 crc kubenswrapper[4741]: I0226 08:29:31.914477 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8ea592fd-176d-496d-a1f2-67c6c3215be1-tmp\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.016052 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8ea592fd-176d-496d-a1f2-67c6c3215be1-sa-token\") pod \"collector-xvznb\" (UID: 
\"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.016158 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ea592fd-176d-496d-a1f2-67c6c3215be1-trusted-ca\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.016188 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8ea592fd-176d-496d-a1f2-67c6c3215be1-tmp\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.016241 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8ea592fd-176d-496d-a1f2-67c6c3215be1-datadir\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.016263 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8ea592fd-176d-496d-a1f2-67c6c3215be1-metrics\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.016296 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ea592fd-176d-496d-a1f2-67c6c3215be1-config\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.016323 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8ea592fd-176d-496d-a1f2-67c6c3215be1-collector-syslog-receiver\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.016388 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8ea592fd-176d-496d-a1f2-67c6c3215be1-config-openshift-service-cacrt\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.016414 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8ea592fd-176d-496d-a1f2-67c6c3215be1-entrypoint\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.016439 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbz2r\" (UniqueName: \"kubernetes.io/projected/8ea592fd-176d-496d-a1f2-67c6c3215be1-kube-api-access-sbz2r\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.016463 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8ea592fd-176d-496d-a1f2-67c6c3215be1-collector-token\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.017517 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8ea592fd-176d-496d-a1f2-67c6c3215be1-trusted-ca\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.017605 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8ea592fd-176d-496d-a1f2-67c6c3215be1-datadir\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.018001 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8ea592fd-176d-496d-a1f2-67c6c3215be1-config-openshift-service-cacrt\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.018497 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ea592fd-176d-496d-a1f2-67c6c3215be1-config\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.018820 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8ea592fd-176d-496d-a1f2-67c6c3215be1-entrypoint\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.024603 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8ea592fd-176d-496d-a1f2-67c6c3215be1-collector-syslog-receiver\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " 
pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.030621 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8ea592fd-176d-496d-a1f2-67c6c3215be1-metrics\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.034629 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8ea592fd-176d-496d-a1f2-67c6c3215be1-tmp\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.036602 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8ea592fd-176d-496d-a1f2-67c6c3215be1-collector-token\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.040896 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8ea592fd-176d-496d-a1f2-67c6c3215be1-sa-token\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.041164 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbz2r\" (UniqueName: \"kubernetes.io/projected/8ea592fd-176d-496d-a1f2-67c6c3215be1-kube-api-access-sbz2r\") pod \"collector-xvznb\" (UID: \"8ea592fd-176d-496d-a1f2-67c6c3215be1\") " pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.150801 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-xvznb" Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.648212 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-xvznb"] Feb 26 08:29:32 crc kubenswrapper[4741]: W0226 08:29:32.668853 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ea592fd_176d_496d_a1f2_67c6c3215be1.slice/crio-b2ae94ebc10cf40f82e713ee8e3b79bdfa85ecbdeabcda799878546c22bc0448 WatchSource:0}: Error finding container b2ae94ebc10cf40f82e713ee8e3b79bdfa85ecbdeabcda799878546c22bc0448: Status 404 returned error can't find the container with id b2ae94ebc10cf40f82e713ee8e3b79bdfa85ecbdeabcda799878546c22bc0448 Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.738952 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-xvznb" event={"ID":"8ea592fd-176d-496d-a1f2-67c6c3215be1","Type":"ContainerStarted","Data":"b2ae94ebc10cf40f82e713ee8e3b79bdfa85ecbdeabcda799878546c22bc0448"} Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.741044 4741 generic.go:334] "Generic (PLEG): container finished" podID="cc07f1dd-268f-41e1-8130-775a66433816" containerID="0045b84be2e91e8ff15f872b10ae7c6f72dba68ea8603dbf9aee8b7730dc9951" exitCode=0 Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.741147 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wkgt4" event={"ID":"cc07f1dd-268f-41e1-8130-775a66433816","Type":"ContainerDied","Data":"0045b84be2e91e8ff15f872b10ae7c6f72dba68ea8603dbf9aee8b7730dc9951"} Feb 26 08:29:32 crc kubenswrapper[4741]: I0226 08:29:32.741191 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wkgt4" event={"ID":"cc07f1dd-268f-41e1-8130-775a66433816","Type":"ContainerStarted","Data":"60893825341b8332eba848a23f886a0158d48079573e641827a5201160125feb"} Feb 26 08:29:33 crc 
kubenswrapper[4741]: I0226 08:29:33.798078 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f" path="/var/lib/kubelet/pods/cd1ad2e1-7321-49f4-bf93-e8bc9f3eb46f/volumes" Feb 26 08:29:34 crc kubenswrapper[4741]: I0226 08:29:34.759648 4741 generic.go:334] "Generic (PLEG): container finished" podID="cc07f1dd-268f-41e1-8130-775a66433816" containerID="68a4f4576eae942031ee4ebe517a6e01d57efa953a1843150050ef49df1e31b1" exitCode=0 Feb 26 08:29:34 crc kubenswrapper[4741]: I0226 08:29:34.759702 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wkgt4" event={"ID":"cc07f1dd-268f-41e1-8130-775a66433816","Type":"ContainerDied","Data":"68a4f4576eae942031ee4ebe517a6e01d57efa953a1843150050ef49df1e31b1"} Feb 26 08:29:35 crc kubenswrapper[4741]: I0226 08:29:35.773831 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wkgt4" event={"ID":"cc07f1dd-268f-41e1-8130-775a66433816","Type":"ContainerStarted","Data":"dd99455091c06880e3ec88711bde13c939be8826d28cb606ceb36c1e60509ddc"} Feb 26 08:29:35 crc kubenswrapper[4741]: I0226 08:29:35.802699 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wkgt4" podStartSLOduration=3.389070277 podStartE2EDuration="5.802684563s" podCreationTimestamp="2026-02-26 08:29:30 +0000 UTC" firstStartedPulling="2026-02-26 08:29:32.743401386 +0000 UTC m=+1007.739338773" lastFinishedPulling="2026-02-26 08:29:35.157015672 +0000 UTC m=+1010.152953059" observedRunningTime="2026-02-26 08:29:35.799837302 +0000 UTC m=+1010.795774689" watchObservedRunningTime="2026-02-26 08:29:35.802684563 +0000 UTC m=+1010.798621950" Feb 26 08:29:41 crc kubenswrapper[4741]: I0226 08:29:41.376351 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:41 crc kubenswrapper[4741]: 
I0226 08:29:41.377044 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:41 crc kubenswrapper[4741]: I0226 08:29:41.419965 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:41 crc kubenswrapper[4741]: I0226 08:29:41.842179 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-xvznb" event={"ID":"8ea592fd-176d-496d-a1f2-67c6c3215be1","Type":"ContainerStarted","Data":"84fec3de16304a94b50fbabde7dc82afed9a82a0b0e217ff8e3f0ae2345263af"} Feb 26 08:29:41 crc kubenswrapper[4741]: I0226 08:29:41.881753 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-xvznb" podStartSLOduration=2.050320972 podStartE2EDuration="10.881720098s" podCreationTimestamp="2026-02-26 08:29:31 +0000 UTC" firstStartedPulling="2026-02-26 08:29:32.672301586 +0000 UTC m=+1007.668238973" lastFinishedPulling="2026-02-26 08:29:41.503700712 +0000 UTC m=+1016.499638099" observedRunningTime="2026-02-26 08:29:41.871805125 +0000 UTC m=+1016.867742532" watchObservedRunningTime="2026-02-26 08:29:41.881720098 +0000 UTC m=+1016.877657495" Feb 26 08:29:41 crc kubenswrapper[4741]: I0226 08:29:41.918309 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:41 crc kubenswrapper[4741]: I0226 08:29:41.978804 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wkgt4"] Feb 26 08:29:43 crc kubenswrapper[4741]: I0226 08:29:43.863382 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wkgt4" podUID="cc07f1dd-268f-41e1-8130-775a66433816" containerName="registry-server" containerID="cri-o://dd99455091c06880e3ec88711bde13c939be8826d28cb606ceb36c1e60509ddc" 
gracePeriod=2 Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.303375 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.380694 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc07f1dd-268f-41e1-8130-775a66433816-utilities\") pod \"cc07f1dd-268f-41e1-8130-775a66433816\" (UID: \"cc07f1dd-268f-41e1-8130-775a66433816\") " Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.380865 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6rh9\" (UniqueName: \"kubernetes.io/projected/cc07f1dd-268f-41e1-8130-775a66433816-kube-api-access-p6rh9\") pod \"cc07f1dd-268f-41e1-8130-775a66433816\" (UID: \"cc07f1dd-268f-41e1-8130-775a66433816\") " Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.380921 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc07f1dd-268f-41e1-8130-775a66433816-catalog-content\") pod \"cc07f1dd-268f-41e1-8130-775a66433816\" (UID: \"cc07f1dd-268f-41e1-8130-775a66433816\") " Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.381737 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc07f1dd-268f-41e1-8130-775a66433816-utilities" (OuterVolumeSpecName: "utilities") pod "cc07f1dd-268f-41e1-8130-775a66433816" (UID: "cc07f1dd-268f-41e1-8130-775a66433816"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.389982 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc07f1dd-268f-41e1-8130-775a66433816-kube-api-access-p6rh9" (OuterVolumeSpecName: "kube-api-access-p6rh9") pod "cc07f1dd-268f-41e1-8130-775a66433816" (UID: "cc07f1dd-268f-41e1-8130-775a66433816"). InnerVolumeSpecName "kube-api-access-p6rh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.449079 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc07f1dd-268f-41e1-8130-775a66433816-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc07f1dd-268f-41e1-8130-775a66433816" (UID: "cc07f1dd-268f-41e1-8130-775a66433816"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.483799 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc07f1dd-268f-41e1-8130-775a66433816-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.483886 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6rh9\" (UniqueName: \"kubernetes.io/projected/cc07f1dd-268f-41e1-8130-775a66433816-kube-api-access-p6rh9\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.483904 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc07f1dd-268f-41e1-8130-775a66433816-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.875539 4741 generic.go:334] "Generic (PLEG): container finished" podID="cc07f1dd-268f-41e1-8130-775a66433816" 
containerID="dd99455091c06880e3ec88711bde13c939be8826d28cb606ceb36c1e60509ddc" exitCode=0 Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.875611 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wkgt4" event={"ID":"cc07f1dd-268f-41e1-8130-775a66433816","Type":"ContainerDied","Data":"dd99455091c06880e3ec88711bde13c939be8826d28cb606ceb36c1e60509ddc"} Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.875655 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wkgt4" event={"ID":"cc07f1dd-268f-41e1-8130-775a66433816","Type":"ContainerDied","Data":"60893825341b8332eba848a23f886a0158d48079573e641827a5201160125feb"} Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.875685 4741 scope.go:117] "RemoveContainer" containerID="dd99455091c06880e3ec88711bde13c939be8826d28cb606ceb36c1e60509ddc" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.875616 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wkgt4" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.909983 4741 scope.go:117] "RemoveContainer" containerID="68a4f4576eae942031ee4ebe517a6e01d57efa953a1843150050ef49df1e31b1" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.929323 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wkgt4"] Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.940776 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wkgt4"] Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.955275 4741 scope.go:117] "RemoveContainer" containerID="0045b84be2e91e8ff15f872b10ae7c6f72dba68ea8603dbf9aee8b7730dc9951" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.977221 4741 scope.go:117] "RemoveContainer" containerID="dd99455091c06880e3ec88711bde13c939be8826d28cb606ceb36c1e60509ddc" Feb 26 08:29:44 crc kubenswrapper[4741]: E0226 08:29:44.977780 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd99455091c06880e3ec88711bde13c939be8826d28cb606ceb36c1e60509ddc\": container with ID starting with dd99455091c06880e3ec88711bde13c939be8826d28cb606ceb36c1e60509ddc not found: ID does not exist" containerID="dd99455091c06880e3ec88711bde13c939be8826d28cb606ceb36c1e60509ddc" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.977819 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd99455091c06880e3ec88711bde13c939be8826d28cb606ceb36c1e60509ddc"} err="failed to get container status \"dd99455091c06880e3ec88711bde13c939be8826d28cb606ceb36c1e60509ddc\": rpc error: code = NotFound desc = could not find container \"dd99455091c06880e3ec88711bde13c939be8826d28cb606ceb36c1e60509ddc\": container with ID starting with dd99455091c06880e3ec88711bde13c939be8826d28cb606ceb36c1e60509ddc not 
found: ID does not exist" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.977848 4741 scope.go:117] "RemoveContainer" containerID="68a4f4576eae942031ee4ebe517a6e01d57efa953a1843150050ef49df1e31b1" Feb 26 08:29:44 crc kubenswrapper[4741]: E0226 08:29:44.978178 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68a4f4576eae942031ee4ebe517a6e01d57efa953a1843150050ef49df1e31b1\": container with ID starting with 68a4f4576eae942031ee4ebe517a6e01d57efa953a1843150050ef49df1e31b1 not found: ID does not exist" containerID="68a4f4576eae942031ee4ebe517a6e01d57efa953a1843150050ef49df1e31b1" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.978212 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68a4f4576eae942031ee4ebe517a6e01d57efa953a1843150050ef49df1e31b1"} err="failed to get container status \"68a4f4576eae942031ee4ebe517a6e01d57efa953a1843150050ef49df1e31b1\": rpc error: code = NotFound desc = could not find container \"68a4f4576eae942031ee4ebe517a6e01d57efa953a1843150050ef49df1e31b1\": container with ID starting with 68a4f4576eae942031ee4ebe517a6e01d57efa953a1843150050ef49df1e31b1 not found: ID does not exist" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.978233 4741 scope.go:117] "RemoveContainer" containerID="0045b84be2e91e8ff15f872b10ae7c6f72dba68ea8603dbf9aee8b7730dc9951" Feb 26 08:29:44 crc kubenswrapper[4741]: E0226 08:29:44.978731 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0045b84be2e91e8ff15f872b10ae7c6f72dba68ea8603dbf9aee8b7730dc9951\": container with ID starting with 0045b84be2e91e8ff15f872b10ae7c6f72dba68ea8603dbf9aee8b7730dc9951 not found: ID does not exist" containerID="0045b84be2e91e8ff15f872b10ae7c6f72dba68ea8603dbf9aee8b7730dc9951" Feb 26 08:29:44 crc kubenswrapper[4741]: I0226 08:29:44.978758 4741 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0045b84be2e91e8ff15f872b10ae7c6f72dba68ea8603dbf9aee8b7730dc9951"} err="failed to get container status \"0045b84be2e91e8ff15f872b10ae7c6f72dba68ea8603dbf9aee8b7730dc9951\": rpc error: code = NotFound desc = could not find container \"0045b84be2e91e8ff15f872b10ae7c6f72dba68ea8603dbf9aee8b7730dc9951\": container with ID starting with 0045b84be2e91e8ff15f872b10ae7c6f72dba68ea8603dbf9aee8b7730dc9951 not found: ID does not exist" Feb 26 08:29:45 crc kubenswrapper[4741]: I0226 08:29:45.805341 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc07f1dd-268f-41e1-8130-775a66433816" path="/var/lib/kubelet/pods/cc07f1dd-268f-41e1-8130-775a66433816/volumes" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.683204 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bgvg2"] Feb 26 08:29:47 crc kubenswrapper[4741]: E0226 08:29:47.683812 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc07f1dd-268f-41e1-8130-775a66433816" containerName="extract-utilities" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.683842 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc07f1dd-268f-41e1-8130-775a66433816" containerName="extract-utilities" Feb 26 08:29:47 crc kubenswrapper[4741]: E0226 08:29:47.683875 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc07f1dd-268f-41e1-8130-775a66433816" containerName="registry-server" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.683890 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc07f1dd-268f-41e1-8130-775a66433816" containerName="registry-server" Feb 26 08:29:47 crc kubenswrapper[4741]: E0226 08:29:47.683923 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc07f1dd-268f-41e1-8130-775a66433816" containerName="extract-content" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 
08:29:47.683938 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc07f1dd-268f-41e1-8130-775a66433816" containerName="extract-content" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.684424 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc07f1dd-268f-41e1-8130-775a66433816" containerName="registry-server" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.686702 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgvg2" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.718104 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgvg2"] Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.759591 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-catalog-content\") pod \"redhat-marketplace-bgvg2\" (UID: \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\") " pod="openshift-marketplace/redhat-marketplace-bgvg2" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.759684 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72vxq\" (UniqueName: \"kubernetes.io/projected/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-kube-api-access-72vxq\") pod \"redhat-marketplace-bgvg2\" (UID: \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\") " pod="openshift-marketplace/redhat-marketplace-bgvg2" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.759830 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-utilities\") pod \"redhat-marketplace-bgvg2\" (UID: \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\") " pod="openshift-marketplace/redhat-marketplace-bgvg2" Feb 26 08:29:47 crc kubenswrapper[4741]: 
I0226 08:29:47.861969 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-catalog-content\") pod \"redhat-marketplace-bgvg2\" (UID: \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\") " pod="openshift-marketplace/redhat-marketplace-bgvg2" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.862448 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72vxq\" (UniqueName: \"kubernetes.io/projected/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-kube-api-access-72vxq\") pod \"redhat-marketplace-bgvg2\" (UID: \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\") " pod="openshift-marketplace/redhat-marketplace-bgvg2" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.862958 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-catalog-content\") pod \"redhat-marketplace-bgvg2\" (UID: \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\") " pod="openshift-marketplace/redhat-marketplace-bgvg2" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.863501 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-utilities\") pod \"redhat-marketplace-bgvg2\" (UID: \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\") " pod="openshift-marketplace/redhat-marketplace-bgvg2" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.863873 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-utilities\") pod \"redhat-marketplace-bgvg2\" (UID: \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\") " pod="openshift-marketplace/redhat-marketplace-bgvg2" Feb 26 08:29:47 crc kubenswrapper[4741]: I0226 08:29:47.901088 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72vxq\" (UniqueName: \"kubernetes.io/projected/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-kube-api-access-72vxq\") pod \"redhat-marketplace-bgvg2\" (UID: \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\") " pod="openshift-marketplace/redhat-marketplace-bgvg2"
Feb 26 08:29:48 crc kubenswrapper[4741]: I0226 08:29:48.058756 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgvg2"
Feb 26 08:29:48 crc kubenswrapper[4741]: I0226 08:29:48.365901 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgvg2"]
Feb 26 08:29:48 crc kubenswrapper[4741]: I0226 08:29:48.917555 4741 generic.go:334] "Generic (PLEG): container finished" podID="20b603cb-05b1-4c5c-b964-e4adb4b13b4b" containerID="a82dea94c325d1920446c5c6437acab69390ff987bc75afea5416c4539872df7" exitCode=0
Feb 26 08:29:48 crc kubenswrapper[4741]: I0226 08:29:48.919597 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgvg2" event={"ID":"20b603cb-05b1-4c5c-b964-e4adb4b13b4b","Type":"ContainerDied","Data":"a82dea94c325d1920446c5c6437acab69390ff987bc75afea5416c4539872df7"}
Feb 26 08:29:48 crc kubenswrapper[4741]: I0226 08:29:48.919628 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgvg2" event={"ID":"20b603cb-05b1-4c5c-b964-e4adb4b13b4b","Type":"ContainerStarted","Data":"2ca60ace1af64b1ee4c4cf5ecdbec22bbbafd8c8cad16c179753be8f0eed9d30"}
Feb 26 08:29:49 crc kubenswrapper[4741]: I0226 08:29:49.929436 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgvg2" event={"ID":"20b603cb-05b1-4c5c-b964-e4adb4b13b4b","Type":"ContainerStarted","Data":"c4a67fb2a22c91ab267a92fe60ca50b8a9d5a0a6c8489477930c38650625c01f"}
Feb 26 08:29:50 crc kubenswrapper[4741]: I0226 08:29:50.940222 4741 generic.go:334] "Generic (PLEG): container finished" podID="20b603cb-05b1-4c5c-b964-e4adb4b13b4b" containerID="c4a67fb2a22c91ab267a92fe60ca50b8a9d5a0a6c8489477930c38650625c01f" exitCode=0
Feb 26 08:29:50 crc kubenswrapper[4741]: I0226 08:29:50.940309 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgvg2" event={"ID":"20b603cb-05b1-4c5c-b964-e4adb4b13b4b","Type":"ContainerDied","Data":"c4a67fb2a22c91ab267a92fe60ca50b8a9d5a0a6c8489477930c38650625c01f"}
Feb 26 08:29:51 crc kubenswrapper[4741]: I0226 08:29:51.953416 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgvg2" event={"ID":"20b603cb-05b1-4c5c-b964-e4adb4b13b4b","Type":"ContainerStarted","Data":"ef83b6e4366b50df5b55e74dbfc472c16e6de52373363590ab60a25a0dfe89f0"}
Feb 26 08:29:51 crc kubenswrapper[4741]: I0226 08:29:51.975706 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bgvg2" podStartSLOduration=2.5016394220000002 podStartE2EDuration="4.975681173s" podCreationTimestamp="2026-02-26 08:29:47 +0000 UTC" firstStartedPulling="2026-02-26 08:29:48.921101421 +0000 UTC m=+1023.917038798" lastFinishedPulling="2026-02-26 08:29:51.395143142 +0000 UTC m=+1026.391080549" observedRunningTime="2026-02-26 08:29:51.971166284 +0000 UTC m=+1026.967103681" watchObservedRunningTime="2026-02-26 08:29:51.975681173 +0000 UTC m=+1026.971618570"
Feb 26 08:29:55 crc kubenswrapper[4741]: I0226 08:29:55.149940 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 08:29:55 crc kubenswrapper[4741]: I0226 08:29:55.150513 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 08:29:58 crc kubenswrapper[4741]: I0226 08:29:58.059010 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bgvg2"
Feb 26 08:29:58 crc kubenswrapper[4741]: I0226 08:29:58.059577 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bgvg2"
Feb 26 08:29:58 crc kubenswrapper[4741]: I0226 08:29:58.116921 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bgvg2"
Feb 26 08:29:59 crc kubenswrapper[4741]: I0226 08:29:59.103590 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bgvg2"
Feb 26 08:29:59 crc kubenswrapper[4741]: I0226 08:29:59.174434 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgvg2"]
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.160091 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534910-wqbsz"]
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.161532 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534910-wqbsz"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.164138 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.164202 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.164383 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.171359 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"]
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.173216 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.176409 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.176628 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.179730 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534910-wqbsz"]
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.190858 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"]
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.254990 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-secret-volume\") pod \"collect-profiles-29534910-g46zf\" (UID: \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.255305 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jnq9\" (UniqueName: \"kubernetes.io/projected/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-kube-api-access-7jnq9\") pod \"collect-profiles-29534910-g46zf\" (UID: \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.255371 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-config-volume\") pod \"collect-profiles-29534910-g46zf\" (UID: \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.255535 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4qbp\" (UniqueName: \"kubernetes.io/projected/eba5bd55-de2f-4879-b367-dadffdd11853-kube-api-access-l4qbp\") pod \"auto-csr-approver-29534910-wqbsz\" (UID: \"eba5bd55-de2f-4879-b367-dadffdd11853\") " pod="openshift-infra/auto-csr-approver-29534910-wqbsz"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.356973 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-secret-volume\") pod \"collect-profiles-29534910-g46zf\" (UID: \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.357060 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jnq9\" (UniqueName: \"kubernetes.io/projected/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-kube-api-access-7jnq9\") pod \"collect-profiles-29534910-g46zf\" (UID: \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.357096 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-config-volume\") pod \"collect-profiles-29534910-g46zf\" (UID: \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.357165 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4qbp\" (UniqueName: \"kubernetes.io/projected/eba5bd55-de2f-4879-b367-dadffdd11853-kube-api-access-l4qbp\") pod \"auto-csr-approver-29534910-wqbsz\" (UID: \"eba5bd55-de2f-4879-b367-dadffdd11853\") " pod="openshift-infra/auto-csr-approver-29534910-wqbsz"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.358526 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-config-volume\") pod \"collect-profiles-29534910-g46zf\" (UID: \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.369911 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-secret-volume\") pod \"collect-profiles-29534910-g46zf\" (UID: \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.381354 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4qbp\" (UniqueName: \"kubernetes.io/projected/eba5bd55-de2f-4879-b367-dadffdd11853-kube-api-access-l4qbp\") pod \"auto-csr-approver-29534910-wqbsz\" (UID: \"eba5bd55-de2f-4879-b367-dadffdd11853\") " pod="openshift-infra/auto-csr-approver-29534910-wqbsz"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.382988 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jnq9\" (UniqueName: \"kubernetes.io/projected/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-kube-api-access-7jnq9\") pod \"collect-profiles-29534910-g46zf\" (UID: \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.485409 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534910-wqbsz"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.498728 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"
Feb 26 08:30:00 crc kubenswrapper[4741]: I0226 08:30:00.737955 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534910-wqbsz"]
Feb 26 08:30:00 crc kubenswrapper[4741]: W0226 08:30:00.760078 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeba5bd55_de2f_4879_b367_dadffdd11853.slice/crio-18ecd51f4248feea8525a03cccb34b1ed1a16480020989a5db775e83c38725d5 WatchSource:0}: Error finding container 18ecd51f4248feea8525a03cccb34b1ed1a16480020989a5db775e83c38725d5: Status 404 returned error can't find the container with id 18ecd51f4248feea8525a03cccb34b1ed1a16480020989a5db775e83c38725d5
Feb 26 08:30:01 crc kubenswrapper[4741]: I0226 08:30:01.027520 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"]
Feb 26 08:30:01 crc kubenswrapper[4741]: W0226 08:30:01.039829 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8470d1f_2910_449c_96e1_e8dbe81c8c4d.slice/crio-4479b6756fc40ed166a4a617ce0b3f8e0f56bdc33337df8b9936a35cb28972ab WatchSource:0}: Error finding container 4479b6756fc40ed166a4a617ce0b3f8e0f56bdc33337df8b9936a35cb28972ab: Status 404 returned error can't find the container with id 4479b6756fc40ed166a4a617ce0b3f8e0f56bdc33337df8b9936a35cb28972ab
Feb 26 08:30:01 crc kubenswrapper[4741]: I0226 08:30:01.056230 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf" event={"ID":"e8470d1f-2910-449c-96e1-e8dbe81c8c4d","Type":"ContainerStarted","Data":"4479b6756fc40ed166a4a617ce0b3f8e0f56bdc33337df8b9936a35cb28972ab"}
Feb 26 08:30:01 crc kubenswrapper[4741]: I0226 08:30:01.059419 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534910-wqbsz" event={"ID":"eba5bd55-de2f-4879-b367-dadffdd11853","Type":"ContainerStarted","Data":"18ecd51f4248feea8525a03cccb34b1ed1a16480020989a5db775e83c38725d5"}
Feb 26 08:30:01 crc kubenswrapper[4741]: I0226 08:30:01.059690 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bgvg2" podUID="20b603cb-05b1-4c5c-b964-e4adb4b13b4b" containerName="registry-server" containerID="cri-o://ef83b6e4366b50df5b55e74dbfc472c16e6de52373363590ab60a25a0dfe89f0" gracePeriod=2
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.071892 4741 generic.go:334] "Generic (PLEG): container finished" podID="e8470d1f-2910-449c-96e1-e8dbe81c8c4d" containerID="02f9906b9b45c7bfdaa6d7168f385984c426c2443b1af69e275dbe53b6bbd815" exitCode=0
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.072094 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf" event={"ID":"e8470d1f-2910-449c-96e1-e8dbe81c8c4d","Type":"ContainerDied","Data":"02f9906b9b45c7bfdaa6d7168f385984c426c2443b1af69e275dbe53b6bbd815"}
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.075432 4741 generic.go:334] "Generic (PLEG): container finished" podID="20b603cb-05b1-4c5c-b964-e4adb4b13b4b" containerID="ef83b6e4366b50df5b55e74dbfc472c16e6de52373363590ab60a25a0dfe89f0" exitCode=0
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.075478 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgvg2" event={"ID":"20b603cb-05b1-4c5c-b964-e4adb4b13b4b","Type":"ContainerDied","Data":"ef83b6e4366b50df5b55e74dbfc472c16e6de52373363590ab60a25a0dfe89f0"}
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.075510 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgvg2" event={"ID":"20b603cb-05b1-4c5c-b964-e4adb4b13b4b","Type":"ContainerDied","Data":"2ca60ace1af64b1ee4c4cf5ecdbec22bbbafd8c8cad16c179753be8f0eed9d30"}
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.075526 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ca60ace1af64b1ee4c4cf5ecdbec22bbbafd8c8cad16c179753be8f0eed9d30"
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.110921 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgvg2"
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.134983 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-utilities\") pod \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\" (UID: \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\") "
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.135141 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-catalog-content\") pod \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\" (UID: \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\") "
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.135205 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72vxq\" (UniqueName: \"kubernetes.io/projected/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-kube-api-access-72vxq\") pod \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\" (UID: \"20b603cb-05b1-4c5c-b964-e4adb4b13b4b\") "
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.136021 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-utilities" (OuterVolumeSpecName: "utilities") pod "20b603cb-05b1-4c5c-b964-e4adb4b13b4b" (UID: "20b603cb-05b1-4c5c-b964-e4adb4b13b4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.144414 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-kube-api-access-72vxq" (OuterVolumeSpecName: "kube-api-access-72vxq") pod "20b603cb-05b1-4c5c-b964-e4adb4b13b4b" (UID: "20b603cb-05b1-4c5c-b964-e4adb4b13b4b"). InnerVolumeSpecName "kube-api-access-72vxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.160327 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20b603cb-05b1-4c5c-b964-e4adb4b13b4b" (UID: "20b603cb-05b1-4c5c-b964-e4adb4b13b4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.244432 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.244498 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 08:30:02 crc kubenswrapper[4741]: I0226 08:30:02.244528 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72vxq\" (UniqueName: \"kubernetes.io/projected/20b603cb-05b1-4c5c-b964-e4adb4b13b4b-kube-api-access-72vxq\") on node \"crc\" DevicePath \"\""
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.097880 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgvg2"
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.101235 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534910-wqbsz" event={"ID":"eba5bd55-de2f-4879-b367-dadffdd11853","Type":"ContainerStarted","Data":"170dde72cf961223739bba17e82b12705354d780c4e7702e9fb54a910ca80b36"}
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.139412 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534910-wqbsz" podStartSLOduration=1.470113015 podStartE2EDuration="3.139380372s" podCreationTimestamp="2026-02-26 08:30:00 +0000 UTC" firstStartedPulling="2026-02-26 08:30:00.765711036 +0000 UTC m=+1035.761648423" lastFinishedPulling="2026-02-26 08:30:02.434978393 +0000 UTC m=+1037.430915780" observedRunningTime="2026-02-26 08:30:03.128249845 +0000 UTC m=+1038.124187232" watchObservedRunningTime="2026-02-26 08:30:03.139380372 +0000 UTC m=+1038.135317769"
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.163280 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgvg2"]
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.167219 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgvg2"]
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.491217 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.583422 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-secret-volume\") pod \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\" (UID: \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\") "
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.583494 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jnq9\" (UniqueName: \"kubernetes.io/projected/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-kube-api-access-7jnq9\") pod \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\" (UID: \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\") "
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.583619 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-config-volume\") pod \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\" (UID: \"e8470d1f-2910-449c-96e1-e8dbe81c8c4d\") "
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.585264 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-config-volume" (OuterVolumeSpecName: "config-volume") pod "e8470d1f-2910-449c-96e1-e8dbe81c8c4d" (UID: "e8470d1f-2910-449c-96e1-e8dbe81c8c4d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.592356 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e8470d1f-2910-449c-96e1-e8dbe81c8c4d" (UID: "e8470d1f-2910-449c-96e1-e8dbe81c8c4d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.597331 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-kube-api-access-7jnq9" (OuterVolumeSpecName: "kube-api-access-7jnq9") pod "e8470d1f-2910-449c-96e1-e8dbe81c8c4d" (UID: "e8470d1f-2910-449c-96e1-e8dbe81c8c4d"). InnerVolumeSpecName "kube-api-access-7jnq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.685454 4741 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-config-volume\") on node \"crc\" DevicePath \"\""
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.685869 4741 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.685882 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jnq9\" (UniqueName: \"kubernetes.io/projected/e8470d1f-2910-449c-96e1-e8dbe81c8c4d-kube-api-access-7jnq9\") on node \"crc\" DevicePath \"\""
Feb 26 08:30:03 crc kubenswrapper[4741]: I0226 08:30:03.803177 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b603cb-05b1-4c5c-b964-e4adb4b13b4b" path="/var/lib/kubelet/pods/20b603cb-05b1-4c5c-b964-e4adb4b13b4b/volumes"
Feb 26 08:30:04 crc kubenswrapper[4741]: I0226 08:30:04.107757 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf" event={"ID":"e8470d1f-2910-449c-96e1-e8dbe81c8c4d","Type":"ContainerDied","Data":"4479b6756fc40ed166a4a617ce0b3f8e0f56bdc33337df8b9936a35cb28972ab"}
Feb 26 08:30:04 crc kubenswrapper[4741]: I0226 08:30:04.107827 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4479b6756fc40ed166a4a617ce0b3f8e0f56bdc33337df8b9936a35cb28972ab"
Feb 26 08:30:04 crc kubenswrapper[4741]: I0226 08:30:04.107791 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"
Feb 26 08:30:04 crc kubenswrapper[4741]: I0226 08:30:04.109514 4741 generic.go:334] "Generic (PLEG): container finished" podID="eba5bd55-de2f-4879-b367-dadffdd11853" containerID="170dde72cf961223739bba17e82b12705354d780c4e7702e9fb54a910ca80b36" exitCode=0
Feb 26 08:30:04 crc kubenswrapper[4741]: I0226 08:30:04.109556 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534910-wqbsz" event={"ID":"eba5bd55-de2f-4879-b367-dadffdd11853","Type":"ContainerDied","Data":"170dde72cf961223739bba17e82b12705354d780c4e7702e9fb54a910ca80b36"}
Feb 26 08:30:05 crc kubenswrapper[4741]: I0226 08:30:05.543524 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534910-wqbsz"
Feb 26 08:30:05 crc kubenswrapper[4741]: I0226 08:30:05.626363 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4qbp\" (UniqueName: \"kubernetes.io/projected/eba5bd55-de2f-4879-b367-dadffdd11853-kube-api-access-l4qbp\") pod \"eba5bd55-de2f-4879-b367-dadffdd11853\" (UID: \"eba5bd55-de2f-4879-b367-dadffdd11853\") "
Feb 26 08:30:05 crc kubenswrapper[4741]: I0226 08:30:05.637184 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba5bd55-de2f-4879-b367-dadffdd11853-kube-api-access-l4qbp" (OuterVolumeSpecName: "kube-api-access-l4qbp") pod "eba5bd55-de2f-4879-b367-dadffdd11853" (UID: "eba5bd55-de2f-4879-b367-dadffdd11853"). InnerVolumeSpecName "kube-api-access-l4qbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:30:05 crc kubenswrapper[4741]: I0226 08:30:05.744187 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4qbp\" (UniqueName: \"kubernetes.io/projected/eba5bd55-de2f-4879-b367-dadffdd11853-kube-api-access-l4qbp\") on node \"crc\" DevicePath \"\""
Feb 26 08:30:06 crc kubenswrapper[4741]: I0226 08:30:06.128404 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534910-wqbsz" event={"ID":"eba5bd55-de2f-4879-b367-dadffdd11853","Type":"ContainerDied","Data":"18ecd51f4248feea8525a03cccb34b1ed1a16480020989a5db775e83c38725d5"}
Feb 26 08:30:06 crc kubenswrapper[4741]: I0226 08:30:06.128899 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18ecd51f4248feea8525a03cccb34b1ed1a16480020989a5db775e83c38725d5"
Feb 26 08:30:06 crc kubenswrapper[4741]: I0226 08:30:06.128515 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534910-wqbsz"
Feb 26 08:30:06 crc kubenswrapper[4741]: I0226 08:30:06.205211 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534904-2jxvw"]
Feb 26 08:30:06 crc kubenswrapper[4741]: I0226 08:30:06.214387 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534904-2jxvw"]
Feb 26 08:30:07 crc kubenswrapper[4741]: I0226 08:30:07.798420 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c" path="/var/lib/kubelet/pods/3f25b58c-82c9-44f0-ba1e-35ebfa1cde3c/volumes"
Feb 26 08:30:11 crc kubenswrapper[4741]: I0226 08:30:11.657407 4741 scope.go:117] "RemoveContainer" containerID="772393ae344a81ae49cb5da8b2bba006e72fe5aaf9324748af2599f62f5d1514"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.517055 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"]
Feb 26 08:30:12 crc kubenswrapper[4741]: E0226 08:30:12.517705 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8470d1f-2910-449c-96e1-e8dbe81c8c4d" containerName="collect-profiles"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.517727 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8470d1f-2910-449c-96e1-e8dbe81c8c4d" containerName="collect-profiles"
Feb 26 08:30:12 crc kubenswrapper[4741]: E0226 08:30:12.517740 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eba5bd55-de2f-4879-b367-dadffdd11853" containerName="oc"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.517750 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="eba5bd55-de2f-4879-b367-dadffdd11853" containerName="oc"
Feb 26 08:30:12 crc kubenswrapper[4741]: E0226 08:30:12.517765 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b603cb-05b1-4c5c-b964-e4adb4b13b4b" containerName="extract-content"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.517772 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b603cb-05b1-4c5c-b964-e4adb4b13b4b" containerName="extract-content"
Feb 26 08:30:12 crc kubenswrapper[4741]: E0226 08:30:12.517783 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b603cb-05b1-4c5c-b964-e4adb4b13b4b" containerName="extract-utilities"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.517789 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b603cb-05b1-4c5c-b964-e4adb4b13b4b" containerName="extract-utilities"
Feb 26 08:30:12 crc kubenswrapper[4741]: E0226 08:30:12.517796 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b603cb-05b1-4c5c-b964-e4adb4b13b4b" containerName="registry-server"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.517805 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b603cb-05b1-4c5c-b964-e4adb4b13b4b" containerName="registry-server"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.517958 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b603cb-05b1-4c5c-b964-e4adb4b13b4b" containerName="registry-server"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.517973 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8470d1f-2910-449c-96e1-e8dbe81c8c4d" containerName="collect-profiles"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.517983 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="eba5bd55-de2f-4879-b367-dadffdd11853" containerName="oc"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.519164 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.521145 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.534712 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"]
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.669195 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdwvp\" (UniqueName: \"kubernetes.io/projected/fa5f5a60-f494-43cb-9137-51ab0568037a-kube-api-access-jdwvp\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc\" (UID: \"fa5f5a60-f494-43cb-9137-51ab0568037a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.669291 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa5f5a60-f494-43cb-9137-51ab0568037a-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc\" (UID: \"fa5f5a60-f494-43cb-9137-51ab0568037a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.669430 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa5f5a60-f494-43cb-9137-51ab0568037a-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc\" (UID: \"fa5f5a60-f494-43cb-9137-51ab0568037a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.771465 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdwvp\" (UniqueName: \"kubernetes.io/projected/fa5f5a60-f494-43cb-9137-51ab0568037a-kube-api-access-jdwvp\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc\" (UID: \"fa5f5a60-f494-43cb-9137-51ab0568037a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.771577 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa5f5a60-f494-43cb-9137-51ab0568037a-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc\" (UID: \"fa5f5a60-f494-43cb-9137-51ab0568037a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.771638 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa5f5a60-f494-43cb-9137-51ab0568037a-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc\" (UID: \"fa5f5a60-f494-43cb-9137-51ab0568037a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.772231 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa5f5a60-f494-43cb-9137-51ab0568037a-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc\" (UID: \"fa5f5a60-f494-43cb-9137-51ab0568037a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.772262 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa5f5a60-f494-43cb-9137-51ab0568037a-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc\" (UID: \"fa5f5a60-f494-43cb-9137-51ab0568037a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.803264 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdwvp\" (UniqueName: \"kubernetes.io/projected/fa5f5a60-f494-43cb-9137-51ab0568037a-kube-api-access-jdwvp\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc\" (UID: \"fa5f5a60-f494-43cb-9137-51ab0568037a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"
Feb 26 08:30:12 crc kubenswrapper[4741]: I0226 08:30:12.841972 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"
Feb 26 08:30:13 crc kubenswrapper[4741]: I0226 08:30:13.351251 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"]
Feb 26 08:30:14 crc kubenswrapper[4741]: I0226 08:30:14.196853 4741 generic.go:334] "Generic (PLEG): container finished" podID="fa5f5a60-f494-43cb-9137-51ab0568037a" containerID="9203670e09740c07af8f3745010eb560d0597338152a4208748e59743952335b" exitCode=0
Feb 26 08:30:14 crc kubenswrapper[4741]: I0226 08:30:14.196941 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc" event={"ID":"fa5f5a60-f494-43cb-9137-51ab0568037a","Type":"ContainerDied","Data":"9203670e09740c07af8f3745010eb560d0597338152a4208748e59743952335b"}
Feb 26 08:30:14 crc kubenswrapper[4741]: I0226 08:30:14.197048 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc" event={"ID":"fa5f5a60-f494-43cb-9137-51ab0568037a","Type":"ContainerStarted","Data":"4a358d4b707d9a5d9f921777817b039dc2227384803f547a37abcc15577b9bcb"}
Feb 26 08:30:19 crc kubenswrapper[4741]: I0226 08:30:19.283280 4741 generic.go:334] "Generic (PLEG): container finished" podID="fa5f5a60-f494-43cb-9137-51ab0568037a" containerID="7c92f5bc3801549b2c834323f805d9135eb96703381368e9e8e5c286020e2214" exitCode=0
Feb 26 08:30:19 crc kubenswrapper[4741]: I0226 08:30:19.283364 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc" event={"ID":"fa5f5a60-f494-43cb-9137-51ab0568037a","Type":"ContainerDied","Data":"7c92f5bc3801549b2c834323f805d9135eb96703381368e9e8e5c286020e2214"}
Feb 26 08:30:20 crc kubenswrapper[4741]: I0226 08:30:20.294330 4741 generic.go:334] "Generic (PLEG): container finished" podID="fa5f5a60-f494-43cb-9137-51ab0568037a" containerID="d547d7330ec3e08911c32b11c6289bde5f159621012c9e9c7d5bcccada556aff" exitCode=0
Feb 26 08:30:20 crc kubenswrapper[4741]: I0226 08:30:20.294461 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc" event={"ID":"fa5f5a60-f494-43cb-9137-51ab0568037a","Type":"ContainerDied","Data":"d547d7330ec3e08911c32b11c6289bde5f159621012c9e9c7d5bcccada556aff"}
Feb 26 08:30:21 crc kubenswrapper[4741]: I0226 08:30:21.687646 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc"
Feb 26 08:30:21 crc kubenswrapper[4741]: I0226 08:30:21.844473 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa5f5a60-f494-43cb-9137-51ab0568037a-bundle\") pod \"fa5f5a60-f494-43cb-9137-51ab0568037a\" (UID: \"fa5f5a60-f494-43cb-9137-51ab0568037a\") "
Feb 26 08:30:21 crc kubenswrapper[4741]: I0226 08:30:21.844840 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa5f5a60-f494-43cb-9137-51ab0568037a-util\") pod \"fa5f5a60-f494-43cb-9137-51ab0568037a\" (UID: \"fa5f5a60-f494-43cb-9137-51ab0568037a\") "
Feb 26 08:30:21 crc kubenswrapper[4741]: I0226 08:30:21.845194 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdwvp\" (UniqueName: \"kubernetes.io/projected/fa5f5a60-f494-43cb-9137-51ab0568037a-kube-api-access-jdwvp\") pod \"fa5f5a60-f494-43cb-9137-51ab0568037a\" (UID: \"fa5f5a60-f494-43cb-9137-51ab0568037a\") "
Feb 26 08:30:21 crc kubenswrapper[4741]: I0226 08:30:21.845210 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fa5f5a60-f494-43cb-9137-51ab0568037a-bundle" (OuterVolumeSpecName: "bundle") pod "fa5f5a60-f494-43cb-9137-51ab0568037a" (UID: "fa5f5a60-f494-43cb-9137-51ab0568037a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:30:21 crc kubenswrapper[4741]: I0226 08:30:21.854510 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5f5a60-f494-43cb-9137-51ab0568037a-kube-api-access-jdwvp" (OuterVolumeSpecName: "kube-api-access-jdwvp") pod "fa5f5a60-f494-43cb-9137-51ab0568037a" (UID: "fa5f5a60-f494-43cb-9137-51ab0568037a"). InnerVolumeSpecName "kube-api-access-jdwvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:30:21 crc kubenswrapper[4741]: I0226 08:30:21.867827 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa5f5a60-f494-43cb-9137-51ab0568037a-util" (OuterVolumeSpecName: "util") pod "fa5f5a60-f494-43cb-9137-51ab0568037a" (UID: "fa5f5a60-f494-43cb-9137-51ab0568037a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:30:21 crc kubenswrapper[4741]: I0226 08:30:21.947360 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdwvp\" (UniqueName: \"kubernetes.io/projected/fa5f5a60-f494-43cb-9137-51ab0568037a-kube-api-access-jdwvp\") on node \"crc\" DevicePath \"\"" Feb 26 08:30:21 crc kubenswrapper[4741]: I0226 08:30:21.947395 4741 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa5f5a60-f494-43cb-9137-51ab0568037a-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:30:21 crc kubenswrapper[4741]: I0226 08:30:21.947408 4741 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa5f5a60-f494-43cb-9137-51ab0568037a-util\") on node \"crc\" DevicePath \"\"" Feb 26 08:30:22 crc kubenswrapper[4741]: I0226 08:30:22.317139 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc" event={"ID":"fa5f5a60-f494-43cb-9137-51ab0568037a","Type":"ContainerDied","Data":"4a358d4b707d9a5d9f921777817b039dc2227384803f547a37abcc15577b9bcb"} Feb 26 08:30:22 crc kubenswrapper[4741]: I0226 08:30:22.317707 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a358d4b707d9a5d9f921777817b039dc2227384803f547a37abcc15577b9bcb" Feb 26 08:30:22 crc kubenswrapper[4741]: I0226 08:30:22.317379 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc" Feb 26 08:30:25 crc kubenswrapper[4741]: I0226 08:30:25.148950 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:30:25 crc kubenswrapper[4741]: I0226 08:30:25.149530 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:30:25 crc kubenswrapper[4741]: I0226 08:30:25.149593 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:30:25 crc kubenswrapper[4741]: I0226 08:30:25.150387 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68abf4356f81aabd267885bc4a138705d4a3fc790f51e9e7362b1f352ff25cfd"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 08:30:25 crc kubenswrapper[4741]: I0226 08:30:25.150445 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://68abf4356f81aabd267885bc4a138705d4a3fc790f51e9e7362b1f352ff25cfd" gracePeriod=600 Feb 26 08:30:25 crc kubenswrapper[4741]: I0226 08:30:25.345876 4741 generic.go:334] "Generic (PLEG): 
container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="68abf4356f81aabd267885bc4a138705d4a3fc790f51e9e7362b1f352ff25cfd" exitCode=0 Feb 26 08:30:25 crc kubenswrapper[4741]: I0226 08:30:25.345944 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"68abf4356f81aabd267885bc4a138705d4a3fc790f51e9e7362b1f352ff25cfd"} Feb 26 08:30:25 crc kubenswrapper[4741]: I0226 08:30:25.346421 4741 scope.go:117] "RemoveContainer" containerID="5317dc5f3c75a59e412204491b2519a9de9fcc656951e4938ef7f60d11fdcaab" Feb 26 08:30:26 crc kubenswrapper[4741]: I0226 08:30:26.365867 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"2002cb3f72e48e911f95f750897f0b9b646f0cc9cd35a0939515422d73baaa0a"} Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.529883 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-srzj4"] Feb 26 08:30:30 crc kubenswrapper[4741]: E0226 08:30:30.530728 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5f5a60-f494-43cb-9137-51ab0568037a" containerName="extract" Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.530750 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5f5a60-f494-43cb-9137-51ab0568037a" containerName="extract" Feb 26 08:30:30 crc kubenswrapper[4741]: E0226 08:30:30.530771 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5f5a60-f494-43cb-9137-51ab0568037a" containerName="util" Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.530779 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5f5a60-f494-43cb-9137-51ab0568037a" containerName="util" Feb 26 08:30:30 crc kubenswrapper[4741]: E0226 08:30:30.530810 
4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5f5a60-f494-43cb-9137-51ab0568037a" containerName="pull" Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.530823 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5f5a60-f494-43cb-9137-51ab0568037a" containerName="pull" Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.530988 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa5f5a60-f494-43cb-9137-51ab0568037a" containerName="extract" Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.531705 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-srzj4" Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.546396 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-srzj4"] Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.546829 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bcx5k" Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.547122 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.547247 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.612654 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ssqv\" (UniqueName: \"kubernetes.io/projected/b0db9695-b9e5-440a-a1ad-aca0d5386fc6-kube-api-access-5ssqv\") pod \"nmstate-operator-75c5dccd6c-srzj4\" (UID: \"b0db9695-b9e5-440a-a1ad-aca0d5386fc6\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-srzj4" Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.714699 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-5ssqv\" (UniqueName: \"kubernetes.io/projected/b0db9695-b9e5-440a-a1ad-aca0d5386fc6-kube-api-access-5ssqv\") pod \"nmstate-operator-75c5dccd6c-srzj4\" (UID: \"b0db9695-b9e5-440a-a1ad-aca0d5386fc6\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-srzj4" Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.737755 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ssqv\" (UniqueName: \"kubernetes.io/projected/b0db9695-b9e5-440a-a1ad-aca0d5386fc6-kube-api-access-5ssqv\") pod \"nmstate-operator-75c5dccd6c-srzj4\" (UID: \"b0db9695-b9e5-440a-a1ad-aca0d5386fc6\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-srzj4" Feb 26 08:30:30 crc kubenswrapper[4741]: I0226 08:30:30.933885 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-srzj4" Feb 26 08:30:31 crc kubenswrapper[4741]: I0226 08:30:31.401157 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-srzj4"] Feb 26 08:30:31 crc kubenswrapper[4741]: I0226 08:30:31.419462 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-srzj4" event={"ID":"b0db9695-b9e5-440a-a1ad-aca0d5386fc6","Type":"ContainerStarted","Data":"f834fd561c311cf6734e47fe3e8cc1f6079bda2575102c7eb016ebec68723b3a"} Feb 26 08:30:38 crc kubenswrapper[4741]: I0226 08:30:38.529758 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-srzj4" event={"ID":"b0db9695-b9e5-440a-a1ad-aca0d5386fc6","Type":"ContainerStarted","Data":"fc8ee2e98fc249d2cf30b57f807bcaec603213f564c41feebbdd88e8585bed65"} Feb 26 08:30:38 crc kubenswrapper[4741]: I0226 08:30:38.566265 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-srzj4" podStartSLOduration=2.427214802 podStartE2EDuration="8.56622943s" 
podCreationTimestamp="2026-02-26 08:30:30 +0000 UTC" firstStartedPulling="2026-02-26 08:30:31.40888568 +0000 UTC m=+1066.404823067" lastFinishedPulling="2026-02-26 08:30:37.547900308 +0000 UTC m=+1072.543837695" observedRunningTime="2026-02-26 08:30:38.555981358 +0000 UTC m=+1073.551918805" watchObservedRunningTime="2026-02-26 08:30:38.56622943 +0000 UTC m=+1073.562166827" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.617786 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-fl4pn"] Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.619243 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-fl4pn" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.621158 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vfkq9" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.634699 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw"] Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.636289 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.638793 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.641155 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-fl4pn"] Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.652053 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw"] Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.657702 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lc9nj"] Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.658996 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.698954 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99zrg\" (UniqueName: \"kubernetes.io/projected/241e6742-0057-4919-94ff-1653ba2ebeba-kube-api-access-99zrg\") pod \"nmstate-metrics-69594cc75-fl4pn\" (UID: \"241e6742-0057-4919-94ff-1653ba2ebeba\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-fl4pn" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.800857 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d7mt\" (UniqueName: \"kubernetes.io/projected/c06d2b98-49e9-4e2b-9b13-498c00d387a8-kube-api-access-8d7mt\") pod \"nmstate-handler-lc9nj\" (UID: \"c06d2b98-49e9-4e2b-9b13-498c00d387a8\") " pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.800962 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" 
(UniqueName: \"kubernetes.io/secret/dfce00da-1ed0-4246-af75-e66c5aa1bd39-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-8lpzw\" (UID: \"dfce00da-1ed0-4246-af75-e66c5aa1bd39\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.801024 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnnct\" (UniqueName: \"kubernetes.io/projected/dfce00da-1ed0-4246-af75-e66c5aa1bd39-kube-api-access-rnnct\") pod \"nmstate-webhook-786f45cff4-8lpzw\" (UID: \"dfce00da-1ed0-4246-af75-e66c5aa1bd39\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.801087 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99zrg\" (UniqueName: \"kubernetes.io/projected/241e6742-0057-4919-94ff-1653ba2ebeba-kube-api-access-99zrg\") pod \"nmstate-metrics-69594cc75-fl4pn\" (UID: \"241e6742-0057-4919-94ff-1653ba2ebeba\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-fl4pn" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.801136 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c06d2b98-49e9-4e2b-9b13-498c00d387a8-nmstate-lock\") pod \"nmstate-handler-lc9nj\" (UID: \"c06d2b98-49e9-4e2b-9b13-498c00d387a8\") " pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.801537 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c06d2b98-49e9-4e2b-9b13-498c00d387a8-ovs-socket\") pod \"nmstate-handler-lc9nj\" (UID: \"c06d2b98-49e9-4e2b-9b13-498c00d387a8\") " pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.801688 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c06d2b98-49e9-4e2b-9b13-498c00d387a8-dbus-socket\") pod \"nmstate-handler-lc9nj\" (UID: \"c06d2b98-49e9-4e2b-9b13-498c00d387a8\") " pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.815667 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6"] Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.816760 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.835319 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hlbjq" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.835708 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.836032 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.838532 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99zrg\" (UniqueName: \"kubernetes.io/projected/241e6742-0057-4919-94ff-1653ba2ebeba-kube-api-access-99zrg\") pod \"nmstate-metrics-69594cc75-fl4pn\" (UID: \"241e6742-0057-4919-94ff-1653ba2ebeba\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-fl4pn" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.875629 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6"] Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.903529 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/c06d2b98-49e9-4e2b-9b13-498c00d387a8-ovs-socket\") pod \"nmstate-handler-lc9nj\" (UID: \"c06d2b98-49e9-4e2b-9b13-498c00d387a8\") " pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.903616 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c06d2b98-49e9-4e2b-9b13-498c00d387a8-dbus-socket\") pod \"nmstate-handler-lc9nj\" (UID: \"c06d2b98-49e9-4e2b-9b13-498c00d387a8\") " pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.903664 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d7mt\" (UniqueName: \"kubernetes.io/projected/c06d2b98-49e9-4e2b-9b13-498c00d387a8-kube-api-access-8d7mt\") pod \"nmstate-handler-lc9nj\" (UID: \"c06d2b98-49e9-4e2b-9b13-498c00d387a8\") " pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.903695 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dfce00da-1ed0-4246-af75-e66c5aa1bd39-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-8lpzw\" (UID: \"dfce00da-1ed0-4246-af75-e66c5aa1bd39\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.903689 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c06d2b98-49e9-4e2b-9b13-498c00d387a8-ovs-socket\") pod \"nmstate-handler-lc9nj\" (UID: \"c06d2b98-49e9-4e2b-9b13-498c00d387a8\") " pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.903721 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/1b75dec6-9aac-4c45-aab5-5a08eed4baa5-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-lbpr6\" (UID: \"1b75dec6-9aac-4c45-aab5-5a08eed4baa5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.903878 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnnct\" (UniqueName: \"kubernetes.io/projected/dfce00da-1ed0-4246-af75-e66c5aa1bd39-kube-api-access-rnnct\") pod \"nmstate-webhook-786f45cff4-8lpzw\" (UID: \"dfce00da-1ed0-4246-af75-e66c5aa1bd39\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.904047 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b75dec6-9aac-4c45-aab5-5a08eed4baa5-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-lbpr6\" (UID: \"1b75dec6-9aac-4c45-aab5-5a08eed4baa5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.904071 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c06d2b98-49e9-4e2b-9b13-498c00d387a8-nmstate-lock\") pod \"nmstate-handler-lc9nj\" (UID: \"c06d2b98-49e9-4e2b-9b13-498c00d387a8\") " pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.904125 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdrr5\" (UniqueName: \"kubernetes.io/projected/1b75dec6-9aac-4c45-aab5-5a08eed4baa5-kube-api-access-xdrr5\") pod \"nmstate-console-plugin-5dcbbd79cf-lbpr6\" (UID: \"1b75dec6-9aac-4c45-aab5-5a08eed4baa5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.904149 
4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c06d2b98-49e9-4e2b-9b13-498c00d387a8-dbus-socket\") pod \"nmstate-handler-lc9nj\" (UID: \"c06d2b98-49e9-4e2b-9b13-498c00d387a8\") " pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:39 crc kubenswrapper[4741]: E0226 08:30:39.904256 4741 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 26 08:30:39 crc kubenswrapper[4741]: E0226 08:30:39.904326 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfce00da-1ed0-4246-af75-e66c5aa1bd39-tls-key-pair podName:dfce00da-1ed0-4246-af75-e66c5aa1bd39 nodeName:}" failed. No retries permitted until 2026-02-26 08:30:40.404301717 +0000 UTC m=+1075.400239104 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/dfce00da-1ed0-4246-af75-e66c5aa1bd39-tls-key-pair") pod "nmstate-webhook-786f45cff4-8lpzw" (UID: "dfce00da-1ed0-4246-af75-e66c5aa1bd39") : secret "openshift-nmstate-webhook" not found Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.904530 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c06d2b98-49e9-4e2b-9b13-498c00d387a8-nmstate-lock\") pod \"nmstate-handler-lc9nj\" (UID: \"c06d2b98-49e9-4e2b-9b13-498c00d387a8\") " pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.932553 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d7mt\" (UniqueName: \"kubernetes.io/projected/c06d2b98-49e9-4e2b-9b13-498c00d387a8-kube-api-access-8d7mt\") pod \"nmstate-handler-lc9nj\" (UID: \"c06d2b98-49e9-4e2b-9b13-498c00d387a8\") " pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.933580 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnnct\" (UniqueName: \"kubernetes.io/projected/dfce00da-1ed0-4246-af75-e66c5aa1bd39-kube-api-access-rnnct\") pod \"nmstate-webhook-786f45cff4-8lpzw\" (UID: \"dfce00da-1ed0-4246-af75-e66c5aa1bd39\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.942976 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-fl4pn" Feb 26 08:30:39 crc kubenswrapper[4741]: I0226 08:30:39.980457 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.008033 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1b75dec6-9aac-4c45-aab5-5a08eed4baa5-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-lbpr6\" (UID: \"1b75dec6-9aac-4c45-aab5-5a08eed4baa5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.008146 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b75dec6-9aac-4c45-aab5-5a08eed4baa5-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-lbpr6\" (UID: \"1b75dec6-9aac-4c45-aab5-5a08eed4baa5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.008193 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdrr5\" (UniqueName: \"kubernetes.io/projected/1b75dec6-9aac-4c45-aab5-5a08eed4baa5-kube-api-access-xdrr5\") pod \"nmstate-console-plugin-5dcbbd79cf-lbpr6\" (UID: \"1b75dec6-9aac-4c45-aab5-5a08eed4baa5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" Feb 26 
08:30:40 crc kubenswrapper[4741]: E0226 08:30:40.008573 4741 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 26 08:30:40 crc kubenswrapper[4741]: E0226 08:30:40.008645 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b75dec6-9aac-4c45-aab5-5a08eed4baa5-plugin-serving-cert podName:1b75dec6-9aac-4c45-aab5-5a08eed4baa5 nodeName:}" failed. No retries permitted until 2026-02-26 08:30:40.508623368 +0000 UTC m=+1075.504560755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/1b75dec6-9aac-4c45-aab5-5a08eed4baa5-plugin-serving-cert") pod "nmstate-console-plugin-5dcbbd79cf-lbpr6" (UID: "1b75dec6-9aac-4c45-aab5-5a08eed4baa5") : secret "plugin-serving-cert" not found Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.009522 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1b75dec6-9aac-4c45-aab5-5a08eed4baa5-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-lbpr6\" (UID: \"1b75dec6-9aac-4c45-aab5-5a08eed4baa5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" Feb 26 08:30:40 crc kubenswrapper[4741]: W0226 08:30:40.030245 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc06d2b98_49e9_4e2b_9b13_498c00d387a8.slice/crio-1e55ed93f6208434c8333ccc49a443fa02ab0e7a67389838167f8f564c591740 WatchSource:0}: Error finding container 1e55ed93f6208434c8333ccc49a443fa02ab0e7a67389838167f8f564c591740: Status 404 returned error can't find the container with id 1e55ed93f6208434c8333ccc49a443fa02ab0e7a67389838167f8f564c591740 Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.036500 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdrr5\" (UniqueName: 
\"kubernetes.io/projected/1b75dec6-9aac-4c45-aab5-5a08eed4baa5-kube-api-access-xdrr5\") pod \"nmstate-console-plugin-5dcbbd79cf-lbpr6\" (UID: \"1b75dec6-9aac-4c45-aab5-5a08eed4baa5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.043055 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7bcfd56675-fs5kg"] Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.044465 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.063319 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bcfd56675-fs5kg"] Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.215383 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-trusted-ca-bundle\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.215872 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-service-ca\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.215928 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-console-config\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 
08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.215985 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7vc5\" (UniqueName: \"kubernetes.io/projected/7920dbc4-e9a6-40ab-b766-2546575f2014-kube-api-access-w7vc5\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.216034 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7920dbc4-e9a6-40ab-b766-2546575f2014-console-serving-cert\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.216122 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7920dbc4-e9a6-40ab-b766-2546575f2014-console-oauth-config\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.216143 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-oauth-serving-cert\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.317917 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-console-config\") pod \"console-7bcfd56675-fs5kg\" (UID: 
\"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.317974 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7vc5\" (UniqueName: \"kubernetes.io/projected/7920dbc4-e9a6-40ab-b766-2546575f2014-kube-api-access-w7vc5\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.318013 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7920dbc4-e9a6-40ab-b766-2546575f2014-console-serving-cert\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.318085 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7920dbc4-e9a6-40ab-b766-2546575f2014-console-oauth-config\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.318129 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-oauth-serving-cert\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.318151 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-trusted-ca-bundle\") pod \"console-7bcfd56675-fs5kg\" (UID: 
\"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.318191 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-service-ca\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.319176 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-service-ca\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.319676 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-console-config\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.321138 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-oauth-serving-cert\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.322734 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-trusted-ca-bundle\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " 
pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.327636 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7920dbc4-e9a6-40ab-b766-2546575f2014-console-serving-cert\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.327679 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7920dbc4-e9a6-40ab-b766-2546575f2014-console-oauth-config\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.340529 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7vc5\" (UniqueName: \"kubernetes.io/projected/7920dbc4-e9a6-40ab-b766-2546575f2014-kube-api-access-w7vc5\") pod \"console-7bcfd56675-fs5kg\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.362739 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.420758 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dfce00da-1ed0-4246-af75-e66c5aa1bd39-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-8lpzw\" (UID: \"dfce00da-1ed0-4246-af75-e66c5aa1bd39\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.424742 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dfce00da-1ed0-4246-af75-e66c5aa1bd39-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-8lpzw\" (UID: \"dfce00da-1ed0-4246-af75-e66c5aa1bd39\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.460518 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-fl4pn"] Feb 26 08:30:40 crc kubenswrapper[4741]: W0226 08:30:40.470737 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod241e6742_0057_4919_94ff_1653ba2ebeba.slice/crio-7536ceba9682e7e13fea068e79830f0f5df9630bda2e38a056f36ba705dabb6a WatchSource:0}: Error finding container 7536ceba9682e7e13fea068e79830f0f5df9630bda2e38a056f36ba705dabb6a: Status 404 returned error can't find the container with id 7536ceba9682e7e13fea068e79830f0f5df9630bda2e38a056f36ba705dabb6a Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.523234 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b75dec6-9aac-4c45-aab5-5a08eed4baa5-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-lbpr6\" (UID: \"1b75dec6-9aac-4c45-aab5-5a08eed4baa5\") " 
pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.529864 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b75dec6-9aac-4c45-aab5-5a08eed4baa5-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-lbpr6\" (UID: \"1b75dec6-9aac-4c45-aab5-5a08eed4baa5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.550498 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.550884 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-fl4pn" event={"ID":"241e6742-0057-4919-94ff-1653ba2ebeba","Type":"ContainerStarted","Data":"7536ceba9682e7e13fea068e79830f0f5df9630bda2e38a056f36ba705dabb6a"} Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.552311 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lc9nj" event={"ID":"c06d2b98-49e9-4e2b-9b13-498c00d387a8","Type":"ContainerStarted","Data":"1e55ed93f6208434c8333ccc49a443fa02ab0e7a67389838167f8f564c591740"} Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.734496 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" Feb 26 08:30:40 crc kubenswrapper[4741]: I0226 08:30:40.821140 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bcfd56675-fs5kg"] Feb 26 08:30:41 crc kubenswrapper[4741]: I0226 08:30:41.008855 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6"] Feb 26 08:30:41 crc kubenswrapper[4741]: I0226 08:30:41.025678 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw"] Feb 26 08:30:41 crc kubenswrapper[4741]: W0226 08:30:41.037978 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfce00da_1ed0_4246_af75_e66c5aa1bd39.slice/crio-d44a66a97912cbd7f2cafdf4937500fb16c7458328e68ebc8b82fa10aa97c9ff WatchSource:0}: Error finding container d44a66a97912cbd7f2cafdf4937500fb16c7458328e68ebc8b82fa10aa97c9ff: Status 404 returned error can't find the container with id d44a66a97912cbd7f2cafdf4937500fb16c7458328e68ebc8b82fa10aa97c9ff Feb 26 08:30:41 crc kubenswrapper[4741]: I0226 08:30:41.565080 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" event={"ID":"1b75dec6-9aac-4c45-aab5-5a08eed4baa5","Type":"ContainerStarted","Data":"f7771868b44865d010623d08af7cd0bc4e87eca891a3191f8ad566064bbd12bb"} Feb 26 08:30:41 crc kubenswrapper[4741]: I0226 08:30:41.567158 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bcfd56675-fs5kg" event={"ID":"7920dbc4-e9a6-40ab-b766-2546575f2014","Type":"ContainerStarted","Data":"15b7c5f850cd6c4cf26f64e2f1fca1664b43778bb02a1b8e40306ce3a5b21280"} Feb 26 08:30:41 crc kubenswrapper[4741]: I0226 08:30:41.567210 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bcfd56675-fs5kg" 
event={"ID":"7920dbc4-e9a6-40ab-b766-2546575f2014","Type":"ContainerStarted","Data":"e0e52e64b6f14257197ce3029ca53805f49bbadd709669b8122aef4ac1e90e6e"} Feb 26 08:30:41 crc kubenswrapper[4741]: I0226 08:30:41.568724 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" event={"ID":"dfce00da-1ed0-4246-af75-e66c5aa1bd39","Type":"ContainerStarted","Data":"d44a66a97912cbd7f2cafdf4937500fb16c7458328e68ebc8b82fa10aa97c9ff"} Feb 26 08:30:41 crc kubenswrapper[4741]: I0226 08:30:41.593032 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7bcfd56675-fs5kg" podStartSLOduration=2.593005361 podStartE2EDuration="2.593005361s" podCreationTimestamp="2026-02-26 08:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:30:41.590601953 +0000 UTC m=+1076.586539360" watchObservedRunningTime="2026-02-26 08:30:41.593005361 +0000 UTC m=+1076.588942748" Feb 26 08:30:43 crc kubenswrapper[4741]: I0226 08:30:43.600895 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" event={"ID":"dfce00da-1ed0-4246-af75-e66c5aa1bd39","Type":"ContainerStarted","Data":"c718e1a37d9a7348a650bd21f5b22bce4ecc4018827cd64cdce7a551711c3dcb"} Feb 26 08:30:43 crc kubenswrapper[4741]: I0226 08:30:43.601555 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" Feb 26 08:30:43 crc kubenswrapper[4741]: I0226 08:30:43.603848 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lc9nj" event={"ID":"c06d2b98-49e9-4e2b-9b13-498c00d387a8","Type":"ContainerStarted","Data":"92b58a560cf4dc408cab75ab79b45da3b2d5646da80978b5c2401472f9fcdb68"} Feb 26 08:30:43 crc kubenswrapper[4741]: I0226 08:30:43.604003 4741 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:43 crc kubenswrapper[4741]: I0226 08:30:43.607712 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-fl4pn" event={"ID":"241e6742-0057-4919-94ff-1653ba2ebeba","Type":"ContainerStarted","Data":"36e37ef29801da0acdf9942570f2f0775c61b020fe972035aaf1f254fb8a6517"} Feb 26 08:30:43 crc kubenswrapper[4741]: I0226 08:30:43.640837 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" podStartSLOduration=2.619533636 podStartE2EDuration="4.640803572s" podCreationTimestamp="2026-02-26 08:30:39 +0000 UTC" firstStartedPulling="2026-02-26 08:30:41.040477305 +0000 UTC m=+1076.036414702" lastFinishedPulling="2026-02-26 08:30:43.061747211 +0000 UTC m=+1078.057684638" observedRunningTime="2026-02-26 08:30:43.621290686 +0000 UTC m=+1078.617228083" watchObservedRunningTime="2026-02-26 08:30:43.640803572 +0000 UTC m=+1078.636741129" Feb 26 08:30:43 crc kubenswrapper[4741]: I0226 08:30:43.643545 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lc9nj" podStartSLOduration=1.6321250840000001 podStartE2EDuration="4.643530919s" podCreationTimestamp="2026-02-26 08:30:39 +0000 UTC" firstStartedPulling="2026-02-26 08:30:40.04555555 +0000 UTC m=+1075.041492937" lastFinishedPulling="2026-02-26 08:30:43.056961365 +0000 UTC m=+1078.052898772" observedRunningTime="2026-02-26 08:30:43.639846344 +0000 UTC m=+1078.635783751" watchObservedRunningTime="2026-02-26 08:30:43.643530919 +0000 UTC m=+1078.639468306" Feb 26 08:30:45 crc kubenswrapper[4741]: I0226 08:30:45.641480 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" 
event={"ID":"1b75dec6-9aac-4c45-aab5-5a08eed4baa5","Type":"ContainerStarted","Data":"3248c5e96d74051742967d60c3b285b48616a98a336b8a0b72bfedfc073c49c2"} Feb 26 08:30:45 crc kubenswrapper[4741]: I0226 08:30:45.673522 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-lbpr6" podStartSLOduration=3.254668216 podStartE2EDuration="6.673492633s" podCreationTimestamp="2026-02-26 08:30:39 +0000 UTC" firstStartedPulling="2026-02-26 08:30:41.020305161 +0000 UTC m=+1076.016242548" lastFinishedPulling="2026-02-26 08:30:44.439129568 +0000 UTC m=+1079.435066965" observedRunningTime="2026-02-26 08:30:45.673301227 +0000 UTC m=+1080.669238704" watchObservedRunningTime="2026-02-26 08:30:45.673492633 +0000 UTC m=+1080.669430020" Feb 26 08:30:48 crc kubenswrapper[4741]: I0226 08:30:48.678513 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-fl4pn" event={"ID":"241e6742-0057-4919-94ff-1653ba2ebeba","Type":"ContainerStarted","Data":"26f7c43485d5ee4eb7eb4b877e966cc152fb0270ad800d03f9f5f1e8bf440da8"} Feb 26 08:30:48 crc kubenswrapper[4741]: I0226 08:30:48.702750 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-fl4pn" podStartSLOduration=1.813983305 podStartE2EDuration="9.702727274s" podCreationTimestamp="2026-02-26 08:30:39 +0000 UTC" firstStartedPulling="2026-02-26 08:30:40.476271537 +0000 UTC m=+1075.472208924" lastFinishedPulling="2026-02-26 08:30:48.365015506 +0000 UTC m=+1083.360952893" observedRunningTime="2026-02-26 08:30:48.69518768 +0000 UTC m=+1083.691125067" watchObservedRunningTime="2026-02-26 08:30:48.702727274 +0000 UTC m=+1083.698664661" Feb 26 08:30:50 crc kubenswrapper[4741]: I0226 08:30:50.018190 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lc9nj" Feb 26 08:30:50 crc kubenswrapper[4741]: I0226 
08:30:50.364651 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:50 crc kubenswrapper[4741]: I0226 08:30:50.364737 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:50 crc kubenswrapper[4741]: I0226 08:30:50.371231 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:50 crc kubenswrapper[4741]: I0226 08:30:50.721553 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:30:50 crc kubenswrapper[4741]: I0226 08:30:50.807242 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b59954f46-5k6gz"] Feb 26 08:31:00 crc kubenswrapper[4741]: I0226 08:31:00.558845 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" Feb 26 08:31:15 crc kubenswrapper[4741]: I0226 08:31:15.852318 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b59954f46-5k6gz" podUID="58b8cb21-1eba-4ae6-84d5-64c306112b53" containerName="console" containerID="cri-o://9f132c4e068e18a809d74fba42aaa0236d37e9e14e406f18c755c180a2635f49" gracePeriod=15 Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.377805 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b59954f46-5k6gz_58b8cb21-1eba-4ae6-84d5-64c306112b53/console/0.log" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.378951 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.479569 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-oauth-serving-cert\") pod \"58b8cb21-1eba-4ae6-84d5-64c306112b53\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.479690 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-serving-cert\") pod \"58b8cb21-1eba-4ae6-84d5-64c306112b53\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.479725 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-oauth-config\") pod \"58b8cb21-1eba-4ae6-84d5-64c306112b53\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.479768 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khvtc\" (UniqueName: \"kubernetes.io/projected/58b8cb21-1eba-4ae6-84d5-64c306112b53-kube-api-access-khvtc\") pod \"58b8cb21-1eba-4ae6-84d5-64c306112b53\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.479843 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-service-ca\") pod \"58b8cb21-1eba-4ae6-84d5-64c306112b53\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.479943 4741 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-trusted-ca-bundle\") pod \"58b8cb21-1eba-4ae6-84d5-64c306112b53\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.479979 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-config\") pod \"58b8cb21-1eba-4ae6-84d5-64c306112b53\" (UID: \"58b8cb21-1eba-4ae6-84d5-64c306112b53\") " Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.480887 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "58b8cb21-1eba-4ae6-84d5-64c306112b53" (UID: "58b8cb21-1eba-4ae6-84d5-64c306112b53"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.481299 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-config" (OuterVolumeSpecName: "console-config") pod "58b8cb21-1eba-4ae6-84d5-64c306112b53" (UID: "58b8cb21-1eba-4ae6-84d5-64c306112b53"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.481462 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-service-ca" (OuterVolumeSpecName: "service-ca") pod "58b8cb21-1eba-4ae6-84d5-64c306112b53" (UID: "58b8cb21-1eba-4ae6-84d5-64c306112b53"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.481483 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "58b8cb21-1eba-4ae6-84d5-64c306112b53" (UID: "58b8cb21-1eba-4ae6-84d5-64c306112b53"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.487126 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "58b8cb21-1eba-4ae6-84d5-64c306112b53" (UID: "58b8cb21-1eba-4ae6-84d5-64c306112b53"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.488098 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b8cb21-1eba-4ae6-84d5-64c306112b53-kube-api-access-khvtc" (OuterVolumeSpecName: "kube-api-access-khvtc") pod "58b8cb21-1eba-4ae6-84d5-64c306112b53" (UID: "58b8cb21-1eba-4ae6-84d5-64c306112b53"). InnerVolumeSpecName "kube-api-access-khvtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.492393 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "58b8cb21-1eba-4ae6-84d5-64c306112b53" (UID: "58b8cb21-1eba-4ae6-84d5-64c306112b53"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.581624 4741 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.582395 4741 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.582416 4741 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.582432 4741 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.582447 4741 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58b8cb21-1eba-4ae6-84d5-64c306112b53-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.582462 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khvtc\" (UniqueName: \"kubernetes.io/projected/58b8cb21-1eba-4ae6-84d5-64c306112b53-kube-api-access-khvtc\") on node \"crc\" DevicePath \"\"" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.582481 4741 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58b8cb21-1eba-4ae6-84d5-64c306112b53-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:31:16 crc 
kubenswrapper[4741]: I0226 08:31:16.990544 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b59954f46-5k6gz_58b8cb21-1eba-4ae6-84d5-64c306112b53/console/0.log" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.990611 4741 generic.go:334] "Generic (PLEG): container finished" podID="58b8cb21-1eba-4ae6-84d5-64c306112b53" containerID="9f132c4e068e18a809d74fba42aaa0236d37e9e14e406f18c755c180a2635f49" exitCode=2 Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.990649 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b59954f46-5k6gz" event={"ID":"58b8cb21-1eba-4ae6-84d5-64c306112b53","Type":"ContainerDied","Data":"9f132c4e068e18a809d74fba42aaa0236d37e9e14e406f18c755c180a2635f49"} Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.990686 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b59954f46-5k6gz" event={"ID":"58b8cb21-1eba-4ae6-84d5-64c306112b53","Type":"ContainerDied","Data":"f867639c9e87f942f7e1701e448f13a65d09401363826b9fa588a75145974ea4"} Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.990710 4741 scope.go:117] "RemoveContainer" containerID="9f132c4e068e18a809d74fba42aaa0236d37e9e14e406f18c755c180a2635f49" Feb 26 08:31:16 crc kubenswrapper[4741]: I0226 08:31:16.990712 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b59954f46-5k6gz" Feb 26 08:31:17 crc kubenswrapper[4741]: I0226 08:31:17.032465 4741 scope.go:117] "RemoveContainer" containerID="9f132c4e068e18a809d74fba42aaa0236d37e9e14e406f18c755c180a2635f49" Feb 26 08:31:17 crc kubenswrapper[4741]: E0226 08:31:17.033137 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f132c4e068e18a809d74fba42aaa0236d37e9e14e406f18c755c180a2635f49\": container with ID starting with 9f132c4e068e18a809d74fba42aaa0236d37e9e14e406f18c755c180a2635f49 not found: ID does not exist" containerID="9f132c4e068e18a809d74fba42aaa0236d37e9e14e406f18c755c180a2635f49" Feb 26 08:31:17 crc kubenswrapper[4741]: I0226 08:31:17.033179 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f132c4e068e18a809d74fba42aaa0236d37e9e14e406f18c755c180a2635f49"} err="failed to get container status \"9f132c4e068e18a809d74fba42aaa0236d37e9e14e406f18c755c180a2635f49\": rpc error: code = NotFound desc = could not find container \"9f132c4e068e18a809d74fba42aaa0236d37e9e14e406f18c755c180a2635f49\": container with ID starting with 9f132c4e068e18a809d74fba42aaa0236d37e9e14e406f18c755c180a2635f49 not found: ID does not exist" Feb 26 08:31:17 crc kubenswrapper[4741]: I0226 08:31:17.037536 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b59954f46-5k6gz"] Feb 26 08:31:17 crc kubenswrapper[4741]: I0226 08:31:17.043496 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6b59954f46-5k6gz"] Feb 26 08:31:17 crc kubenswrapper[4741]: I0226 08:31:17.812702 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b8cb21-1eba-4ae6-84d5-64c306112b53" path="/var/lib/kubelet/pods/58b8cb21-1eba-4ae6-84d5-64c306112b53/volumes" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.274037 4741 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g"] Feb 26 08:31:20 crc kubenswrapper[4741]: E0226 08:31:20.276846 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b8cb21-1eba-4ae6-84d5-64c306112b53" containerName="console" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.276950 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b8cb21-1eba-4ae6-84d5-64c306112b53" containerName="console" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.277312 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b8cb21-1eba-4ae6-84d5-64c306112b53" containerName="console" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.278754 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.282375 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.287666 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g"] Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.366006 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5psd\" (UniqueName: \"kubernetes.io/projected/8a5cc643-d3e2-4726-aac9-845145612f0e-kube-api-access-d5psd\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g\" (UID: \"8a5cc643-d3e2-4726-aac9-845145612f0e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.366389 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8a5cc643-d3e2-4726-aac9-845145612f0e-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g\" (UID: \"8a5cc643-d3e2-4726-aac9-845145612f0e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.366416 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a5cc643-d3e2-4726-aac9-845145612f0e-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g\" (UID: \"8a5cc643-d3e2-4726-aac9-845145612f0e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.469773 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a5cc643-d3e2-4726-aac9-845145612f0e-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g\" (UID: \"8a5cc643-d3e2-4726-aac9-845145612f0e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.469838 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a5cc643-d3e2-4726-aac9-845145612f0e-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g\" (UID: \"8a5cc643-d3e2-4726-aac9-845145612f0e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.469982 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5psd\" (UniqueName: \"kubernetes.io/projected/8a5cc643-d3e2-4726-aac9-845145612f0e-kube-api-access-d5psd\") pod 
\"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g\" (UID: \"8a5cc643-d3e2-4726-aac9-845145612f0e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.470588 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a5cc643-d3e2-4726-aac9-845145612f0e-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g\" (UID: \"8a5cc643-d3e2-4726-aac9-845145612f0e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.470896 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a5cc643-d3e2-4726-aac9-845145612f0e-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g\" (UID: \"8a5cc643-d3e2-4726-aac9-845145612f0e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.498625 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5psd\" (UniqueName: \"kubernetes.io/projected/8a5cc643-d3e2-4726-aac9-845145612f0e-kube-api-access-d5psd\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g\" (UID: \"8a5cc643-d3e2-4726-aac9-845145612f0e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" Feb 26 08:31:20 crc kubenswrapper[4741]: I0226 08:31:20.605368 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" Feb 26 08:31:21 crc kubenswrapper[4741]: I0226 08:31:21.084346 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g"] Feb 26 08:31:22 crc kubenswrapper[4741]: I0226 08:31:22.041582 4741 generic.go:334] "Generic (PLEG): container finished" podID="8a5cc643-d3e2-4726-aac9-845145612f0e" containerID="8162ef4fd0040c3d90d4a213e4e85f3ff01850f0c4cbb273a64c04da41bbff33" exitCode=0 Feb 26 08:31:22 crc kubenswrapper[4741]: I0226 08:31:22.042325 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" event={"ID":"8a5cc643-d3e2-4726-aac9-845145612f0e","Type":"ContainerDied","Data":"8162ef4fd0040c3d90d4a213e4e85f3ff01850f0c4cbb273a64c04da41bbff33"} Feb 26 08:31:22 crc kubenswrapper[4741]: I0226 08:31:22.042391 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" event={"ID":"8a5cc643-d3e2-4726-aac9-845145612f0e","Type":"ContainerStarted","Data":"4eb2e21e86d72791430ac581d33ed8cea05f0af7b78b95e5128d64c7bb796a32"} Feb 26 08:31:26 crc kubenswrapper[4741]: I0226 08:31:26.099705 4741 generic.go:334] "Generic (PLEG): container finished" podID="8a5cc643-d3e2-4726-aac9-845145612f0e" containerID="49838cbb4aefb2502a6a17942525c837f53d64ec8b2e0fd262b4efec57afc9a3" exitCode=0 Feb 26 08:31:26 crc kubenswrapper[4741]: I0226 08:31:26.099929 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" event={"ID":"8a5cc643-d3e2-4726-aac9-845145612f0e","Type":"ContainerDied","Data":"49838cbb4aefb2502a6a17942525c837f53d64ec8b2e0fd262b4efec57afc9a3"} Feb 26 08:31:27 crc kubenswrapper[4741]: I0226 08:31:27.115255 4741 
generic.go:334] "Generic (PLEG): container finished" podID="8a5cc643-d3e2-4726-aac9-845145612f0e" containerID="c3553e791ef56a556b3cbf988457b9fefe1219667a7fad280a33ad281f316d34" exitCode=0 Feb 26 08:31:27 crc kubenswrapper[4741]: I0226 08:31:27.115376 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" event={"ID":"8a5cc643-d3e2-4726-aac9-845145612f0e","Type":"ContainerDied","Data":"c3553e791ef56a556b3cbf988457b9fefe1219667a7fad280a33ad281f316d34"} Feb 26 08:31:28 crc kubenswrapper[4741]: I0226 08:31:28.610535 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" Feb 26 08:31:28 crc kubenswrapper[4741]: I0226 08:31:28.736231 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a5cc643-d3e2-4726-aac9-845145612f0e-util\") pod \"8a5cc643-d3e2-4726-aac9-845145612f0e\" (UID: \"8a5cc643-d3e2-4726-aac9-845145612f0e\") " Feb 26 08:31:28 crc kubenswrapper[4741]: I0226 08:31:28.736431 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5psd\" (UniqueName: \"kubernetes.io/projected/8a5cc643-d3e2-4726-aac9-845145612f0e-kube-api-access-d5psd\") pod \"8a5cc643-d3e2-4726-aac9-845145612f0e\" (UID: \"8a5cc643-d3e2-4726-aac9-845145612f0e\") " Feb 26 08:31:28 crc kubenswrapper[4741]: I0226 08:31:28.736569 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a5cc643-d3e2-4726-aac9-845145612f0e-bundle\") pod \"8a5cc643-d3e2-4726-aac9-845145612f0e\" (UID: \"8a5cc643-d3e2-4726-aac9-845145612f0e\") " Feb 26 08:31:28 crc kubenswrapper[4741]: I0226 08:31:28.740241 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8a5cc643-d3e2-4726-aac9-845145612f0e-bundle" (OuterVolumeSpecName: "bundle") pod "8a5cc643-d3e2-4726-aac9-845145612f0e" (UID: "8a5cc643-d3e2-4726-aac9-845145612f0e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:31:28 crc kubenswrapper[4741]: I0226 08:31:28.748142 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a5cc643-d3e2-4726-aac9-845145612f0e-kube-api-access-d5psd" (OuterVolumeSpecName: "kube-api-access-d5psd") pod "8a5cc643-d3e2-4726-aac9-845145612f0e" (UID: "8a5cc643-d3e2-4726-aac9-845145612f0e"). InnerVolumeSpecName "kube-api-access-d5psd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:31:28 crc kubenswrapper[4741]: I0226 08:31:28.757912 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a5cc643-d3e2-4726-aac9-845145612f0e-util" (OuterVolumeSpecName: "util") pod "8a5cc643-d3e2-4726-aac9-845145612f0e" (UID: "8a5cc643-d3e2-4726-aac9-845145612f0e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:31:28 crc kubenswrapper[4741]: I0226 08:31:28.839485 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5psd\" (UniqueName: \"kubernetes.io/projected/8a5cc643-d3e2-4726-aac9-845145612f0e-kube-api-access-d5psd\") on node \"crc\" DevicePath \"\"" Feb 26 08:31:28 crc kubenswrapper[4741]: I0226 08:31:28.839526 4741 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a5cc643-d3e2-4726-aac9-845145612f0e-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:31:28 crc kubenswrapper[4741]: I0226 08:31:28.839536 4741 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a5cc643-d3e2-4726-aac9-845145612f0e-util\") on node \"crc\" DevicePath \"\"" Feb 26 08:31:29 crc kubenswrapper[4741]: I0226 08:31:29.136654 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" event={"ID":"8a5cc643-d3e2-4726-aac9-845145612f0e","Type":"ContainerDied","Data":"4eb2e21e86d72791430ac581d33ed8cea05f0af7b78b95e5128d64c7bb796a32"} Feb 26 08:31:29 crc kubenswrapper[4741]: I0226 08:31:29.137034 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eb2e21e86d72791430ac581d33ed8cea05f0af7b78b95e5128d64c7bb796a32" Feb 26 08:31:29 crc kubenswrapper[4741]: I0226 08:31:29.136747 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.504930 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq"] Feb 26 08:31:39 crc kubenswrapper[4741]: E0226 08:31:39.506197 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a5cc643-d3e2-4726-aac9-845145612f0e" containerName="pull" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.506214 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a5cc643-d3e2-4726-aac9-845145612f0e" containerName="pull" Feb 26 08:31:39 crc kubenswrapper[4741]: E0226 08:31:39.506226 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a5cc643-d3e2-4726-aac9-845145612f0e" containerName="util" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.506233 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a5cc643-d3e2-4726-aac9-845145612f0e" containerName="util" Feb 26 08:31:39 crc kubenswrapper[4741]: E0226 08:31:39.506249 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a5cc643-d3e2-4726-aac9-845145612f0e" containerName="extract" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.506258 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a5cc643-d3e2-4726-aac9-845145612f0e" containerName="extract" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.506453 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a5cc643-d3e2-4726-aac9-845145612f0e" containerName="extract" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.507251 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.510265 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.510283 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.510365 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.510571 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.510785 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-nzfqk" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.529040 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq"] Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.649339 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bb123f4a-b95e-413e-8d1b-a5efc5cbacdd-apiservice-cert\") pod \"metallb-operator-controller-manager-64545648d6-pt5sq\" (UID: \"bb123f4a-b95e-413e-8d1b-a5efc5cbacdd\") " pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.649734 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfb26\" (UniqueName: \"kubernetes.io/projected/bb123f4a-b95e-413e-8d1b-a5efc5cbacdd-kube-api-access-xfb26\") pod 
\"metallb-operator-controller-manager-64545648d6-pt5sq\" (UID: \"bb123f4a-b95e-413e-8d1b-a5efc5cbacdd\") " pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.649913 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb123f4a-b95e-413e-8d1b-a5efc5cbacdd-webhook-cert\") pod \"metallb-operator-controller-manager-64545648d6-pt5sq\" (UID: \"bb123f4a-b95e-413e-8d1b-a5efc5cbacdd\") " pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.751649 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfb26\" (UniqueName: \"kubernetes.io/projected/bb123f4a-b95e-413e-8d1b-a5efc5cbacdd-kube-api-access-xfb26\") pod \"metallb-operator-controller-manager-64545648d6-pt5sq\" (UID: \"bb123f4a-b95e-413e-8d1b-a5efc5cbacdd\") " pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.751748 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb123f4a-b95e-413e-8d1b-a5efc5cbacdd-webhook-cert\") pod \"metallb-operator-controller-manager-64545648d6-pt5sq\" (UID: \"bb123f4a-b95e-413e-8d1b-a5efc5cbacdd\") " pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.751806 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bb123f4a-b95e-413e-8d1b-a5efc5cbacdd-apiservice-cert\") pod \"metallb-operator-controller-manager-64545648d6-pt5sq\" (UID: \"bb123f4a-b95e-413e-8d1b-a5efc5cbacdd\") " pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" Feb 26 08:31:39 crc 
kubenswrapper[4741]: I0226 08:31:39.764554 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb123f4a-b95e-413e-8d1b-a5efc5cbacdd-webhook-cert\") pod \"metallb-operator-controller-manager-64545648d6-pt5sq\" (UID: \"bb123f4a-b95e-413e-8d1b-a5efc5cbacdd\") " pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.773890 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfb26\" (UniqueName: \"kubernetes.io/projected/bb123f4a-b95e-413e-8d1b-a5efc5cbacdd-kube-api-access-xfb26\") pod \"metallb-operator-controller-manager-64545648d6-pt5sq\" (UID: \"bb123f4a-b95e-413e-8d1b-a5efc5cbacdd\") " pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.774797 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bb123f4a-b95e-413e-8d1b-a5efc5cbacdd-apiservice-cert\") pod \"metallb-operator-controller-manager-64545648d6-pt5sq\" (UID: \"bb123f4a-b95e-413e-8d1b-a5efc5cbacdd\") " pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.832790 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.865723 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm"] Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.879498 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.883806 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.883906 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.883811 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-lm82r" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.889800 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm"] Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.955879 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85855f0c-ab53-44f0-8f0d-ac0299c5fc24-webhook-cert\") pod \"metallb-operator-webhook-server-76f9f89dd9-x2csm\" (UID: \"85855f0c-ab53-44f0-8f0d-ac0299c5fc24\") " pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.955933 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl7g9\" (UniqueName: \"kubernetes.io/projected/85855f0c-ab53-44f0-8f0d-ac0299c5fc24-kube-api-access-nl7g9\") pod \"metallb-operator-webhook-server-76f9f89dd9-x2csm\" (UID: \"85855f0c-ab53-44f0-8f0d-ac0299c5fc24\") " pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" Feb 26 08:31:39 crc kubenswrapper[4741]: I0226 08:31:39.955966 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/85855f0c-ab53-44f0-8f0d-ac0299c5fc24-apiservice-cert\") pod \"metallb-operator-webhook-server-76f9f89dd9-x2csm\" (UID: \"85855f0c-ab53-44f0-8f0d-ac0299c5fc24\") " pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" Feb 26 08:31:40 crc kubenswrapper[4741]: I0226 08:31:40.060680 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85855f0c-ab53-44f0-8f0d-ac0299c5fc24-webhook-cert\") pod \"metallb-operator-webhook-server-76f9f89dd9-x2csm\" (UID: \"85855f0c-ab53-44f0-8f0d-ac0299c5fc24\") " pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" Feb 26 08:31:40 crc kubenswrapper[4741]: I0226 08:31:40.065945 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl7g9\" (UniqueName: \"kubernetes.io/projected/85855f0c-ab53-44f0-8f0d-ac0299c5fc24-kube-api-access-nl7g9\") pod \"metallb-operator-webhook-server-76f9f89dd9-x2csm\" (UID: \"85855f0c-ab53-44f0-8f0d-ac0299c5fc24\") " pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" Feb 26 08:31:40 crc kubenswrapper[4741]: I0226 08:31:40.065994 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85855f0c-ab53-44f0-8f0d-ac0299c5fc24-apiservice-cert\") pod \"metallb-operator-webhook-server-76f9f89dd9-x2csm\" (UID: \"85855f0c-ab53-44f0-8f0d-ac0299c5fc24\") " pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" Feb 26 08:31:40 crc kubenswrapper[4741]: I0226 08:31:40.109457 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl7g9\" (UniqueName: \"kubernetes.io/projected/85855f0c-ab53-44f0-8f0d-ac0299c5fc24-kube-api-access-nl7g9\") pod \"metallb-operator-webhook-server-76f9f89dd9-x2csm\" (UID: \"85855f0c-ab53-44f0-8f0d-ac0299c5fc24\") " 
pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" Feb 26 08:31:40 crc kubenswrapper[4741]: I0226 08:31:40.133885 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85855f0c-ab53-44f0-8f0d-ac0299c5fc24-webhook-cert\") pod \"metallb-operator-webhook-server-76f9f89dd9-x2csm\" (UID: \"85855f0c-ab53-44f0-8f0d-ac0299c5fc24\") " pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" Feb 26 08:31:40 crc kubenswrapper[4741]: I0226 08:31:40.135450 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85855f0c-ab53-44f0-8f0d-ac0299c5fc24-apiservice-cert\") pod \"metallb-operator-webhook-server-76f9f89dd9-x2csm\" (UID: \"85855f0c-ab53-44f0-8f0d-ac0299c5fc24\") " pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" Feb 26 08:31:40 crc kubenswrapper[4741]: I0226 08:31:40.257975 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" Feb 26 08:31:40 crc kubenswrapper[4741]: I0226 08:31:40.588063 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq"] Feb 26 08:31:41 crc kubenswrapper[4741]: I0226 08:31:41.055751 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm"] Feb 26 08:31:41 crc kubenswrapper[4741]: W0226 08:31:41.057007 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85855f0c_ab53_44f0_8f0d_ac0299c5fc24.slice/crio-0de7d472b70763c12e8da14d4a56a59e2cd5cb16ec11e2c5a4041a1e94bb7037 WatchSource:0}: Error finding container 0de7d472b70763c12e8da14d4a56a59e2cd5cb16ec11e2c5a4041a1e94bb7037: Status 404 returned error can't find the container with id 0de7d472b70763c12e8da14d4a56a59e2cd5cb16ec11e2c5a4041a1e94bb7037 Feb 26 08:31:41 crc kubenswrapper[4741]: I0226 08:31:41.515487 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" event={"ID":"bb123f4a-b95e-413e-8d1b-a5efc5cbacdd","Type":"ContainerStarted","Data":"9cf2ea4cf0cf59b3a4dc5dc5f6be0c57d258c48ed887ecde3275e64cf40ddf11"} Feb 26 08:31:41 crc kubenswrapper[4741]: I0226 08:31:41.517449 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" event={"ID":"85855f0c-ab53-44f0-8f0d-ac0299c5fc24","Type":"ContainerStarted","Data":"0de7d472b70763c12e8da14d4a56a59e2cd5cb16ec11e2c5a4041a1e94bb7037"} Feb 26 08:31:53 crc kubenswrapper[4741]: I0226 08:31:53.632890 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" 
event={"ID":"bb123f4a-b95e-413e-8d1b-a5efc5cbacdd","Type":"ContainerStarted","Data":"8901fcd05aa6e6c22d4274e4ab37a8affbe4ae042c6602049f7ecf414d5cb2c1"} Feb 26 08:31:53 crc kubenswrapper[4741]: I0226 08:31:53.634012 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" Feb 26 08:31:53 crc kubenswrapper[4741]: I0226 08:31:53.639933 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" event={"ID":"85855f0c-ab53-44f0-8f0d-ac0299c5fc24","Type":"ContainerStarted","Data":"b1e546f4c34d3f4d2bc4ffe634c50048e1788817baf773f121bb36fa36fc54ff"} Feb 26 08:31:53 crc kubenswrapper[4741]: I0226 08:31:53.640125 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" Feb 26 08:31:53 crc kubenswrapper[4741]: I0226 08:31:53.660816 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" podStartSLOduration=3.257651233 podStartE2EDuration="14.660792738s" podCreationTimestamp="2026-02-26 08:31:39 +0000 UTC" firstStartedPulling="2026-02-26 08:31:40.597203269 +0000 UTC m=+1135.593140656" lastFinishedPulling="2026-02-26 08:31:52.000344764 +0000 UTC m=+1146.996282161" observedRunningTime="2026-02-26 08:31:53.659024358 +0000 UTC m=+1148.654961755" watchObservedRunningTime="2026-02-26 08:31:53.660792738 +0000 UTC m=+1148.656730125" Feb 26 08:31:53 crc kubenswrapper[4741]: I0226 08:31:53.703316 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" podStartSLOduration=3.734300309 podStartE2EDuration="14.70329393s" podCreationTimestamp="2026-02-26 08:31:39 +0000 UTC" firstStartedPulling="2026-02-26 08:31:41.06093068 +0000 UTC m=+1136.056868077" lastFinishedPulling="2026-02-26 
08:31:52.029924301 +0000 UTC m=+1147.025861698" observedRunningTime="2026-02-26 08:31:53.697766064 +0000 UTC m=+1148.693703451" watchObservedRunningTime="2026-02-26 08:31:53.70329393 +0000 UTC m=+1148.699231317" Feb 26 08:32:00 crc kubenswrapper[4741]: I0226 08:32:00.143611 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534912-tnzrt"] Feb 26 08:32:00 crc kubenswrapper[4741]: I0226 08:32:00.145686 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534912-tnzrt" Feb 26 08:32:00 crc kubenswrapper[4741]: I0226 08:32:00.148170 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:32:00 crc kubenswrapper[4741]: I0226 08:32:00.148552 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:32:00 crc kubenswrapper[4741]: I0226 08:32:00.149449 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:32:00 crc kubenswrapper[4741]: I0226 08:32:00.157740 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534912-tnzrt"] Feb 26 08:32:00 crc kubenswrapper[4741]: I0226 08:32:00.277665 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7lz5\" (UniqueName: \"kubernetes.io/projected/e1907a87-1ff8-4d9b-a81e-05f26f468a4d-kube-api-access-h7lz5\") pod \"auto-csr-approver-29534912-tnzrt\" (UID: \"e1907a87-1ff8-4d9b-a81e-05f26f468a4d\") " pod="openshift-infra/auto-csr-approver-29534912-tnzrt" Feb 26 08:32:00 crc kubenswrapper[4741]: I0226 08:32:00.379573 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7lz5\" (UniqueName: \"kubernetes.io/projected/e1907a87-1ff8-4d9b-a81e-05f26f468a4d-kube-api-access-h7lz5\") pod 
\"auto-csr-approver-29534912-tnzrt\" (UID: \"e1907a87-1ff8-4d9b-a81e-05f26f468a4d\") " pod="openshift-infra/auto-csr-approver-29534912-tnzrt" Feb 26 08:32:00 crc kubenswrapper[4741]: I0226 08:32:00.409479 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7lz5\" (UniqueName: \"kubernetes.io/projected/e1907a87-1ff8-4d9b-a81e-05f26f468a4d-kube-api-access-h7lz5\") pod \"auto-csr-approver-29534912-tnzrt\" (UID: \"e1907a87-1ff8-4d9b-a81e-05f26f468a4d\") " pod="openshift-infra/auto-csr-approver-29534912-tnzrt" Feb 26 08:32:00 crc kubenswrapper[4741]: I0226 08:32:00.529605 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534912-tnzrt" Feb 26 08:32:01 crc kubenswrapper[4741]: I0226 08:32:01.043557 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534912-tnzrt"] Feb 26 08:32:01 crc kubenswrapper[4741]: W0226 08:32:01.051917 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1907a87_1ff8_4d9b_a81e_05f26f468a4d.slice/crio-becddd34402d4f1fcb33247195c51e3126f923599cc94488ede6dfde6d1159d6 WatchSource:0}: Error finding container becddd34402d4f1fcb33247195c51e3126f923599cc94488ede6dfde6d1159d6: Status 404 returned error can't find the container with id becddd34402d4f1fcb33247195c51e3126f923599cc94488ede6dfde6d1159d6 Feb 26 08:32:01 crc kubenswrapper[4741]: I0226 08:32:01.730262 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534912-tnzrt" event={"ID":"e1907a87-1ff8-4d9b-a81e-05f26f468a4d","Type":"ContainerStarted","Data":"becddd34402d4f1fcb33247195c51e3126f923599cc94488ede6dfde6d1159d6"} Feb 26 08:32:04 crc kubenswrapper[4741]: I0226 08:32:04.759998 4741 generic.go:334] "Generic (PLEG): container finished" podID="e1907a87-1ff8-4d9b-a81e-05f26f468a4d" 
containerID="fd3adee9dd064679bf59a9b498991c55a5c0fbf66cfd63676141afa04ab9bf13" exitCode=0 Feb 26 08:32:04 crc kubenswrapper[4741]: I0226 08:32:04.760169 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534912-tnzrt" event={"ID":"e1907a87-1ff8-4d9b-a81e-05f26f468a4d","Type":"ContainerDied","Data":"fd3adee9dd064679bf59a9b498991c55a5c0fbf66cfd63676141afa04ab9bf13"} Feb 26 08:32:06 crc kubenswrapper[4741]: I0226 08:32:06.092607 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534912-tnzrt" Feb 26 08:32:06 crc kubenswrapper[4741]: I0226 08:32:06.198234 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7lz5\" (UniqueName: \"kubernetes.io/projected/e1907a87-1ff8-4d9b-a81e-05f26f468a4d-kube-api-access-h7lz5\") pod \"e1907a87-1ff8-4d9b-a81e-05f26f468a4d\" (UID: \"e1907a87-1ff8-4d9b-a81e-05f26f468a4d\") " Feb 26 08:32:06 crc kubenswrapper[4741]: I0226 08:32:06.204736 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1907a87-1ff8-4d9b-a81e-05f26f468a4d-kube-api-access-h7lz5" (OuterVolumeSpecName: "kube-api-access-h7lz5") pod "e1907a87-1ff8-4d9b-a81e-05f26f468a4d" (UID: "e1907a87-1ff8-4d9b-a81e-05f26f468a4d"). InnerVolumeSpecName "kube-api-access-h7lz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:32:06 crc kubenswrapper[4741]: I0226 08:32:06.300972 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7lz5\" (UniqueName: \"kubernetes.io/projected/e1907a87-1ff8-4d9b-a81e-05f26f468a4d-kube-api-access-h7lz5\") on node \"crc\" DevicePath \"\"" Feb 26 08:32:06 crc kubenswrapper[4741]: I0226 08:32:06.781995 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534912-tnzrt" event={"ID":"e1907a87-1ff8-4d9b-a81e-05f26f468a4d","Type":"ContainerDied","Data":"becddd34402d4f1fcb33247195c51e3126f923599cc94488ede6dfde6d1159d6"} Feb 26 08:32:06 crc kubenswrapper[4741]: I0226 08:32:06.782038 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534912-tnzrt" Feb 26 08:32:06 crc kubenswrapper[4741]: I0226 08:32:06.782049 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="becddd34402d4f1fcb33247195c51e3126f923599cc94488ede6dfde6d1159d6" Feb 26 08:32:07 crc kubenswrapper[4741]: I0226 08:32:07.165419 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534906-w9g6b"] Feb 26 08:32:07 crc kubenswrapper[4741]: I0226 08:32:07.175006 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534906-w9g6b"] Feb 26 08:32:07 crc kubenswrapper[4741]: I0226 08:32:07.797010 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6205896b-ec19-465c-b910-187543c44ddd" path="/var/lib/kubelet/pods/6205896b-ec19-465c-b910-187543c44ddd/volumes" Feb 26 08:32:10 crc kubenswrapper[4741]: I0226 08:32:10.264037 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" Feb 26 08:32:25 crc kubenswrapper[4741]: I0226 08:32:25.149589 4741 patch_prober.go:28] interesting 
pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:32:25 crc kubenswrapper[4741]: I0226 08:32:25.150368 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:32:29 crc kubenswrapper[4741]: I0226 08:32:29.836956 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.597872 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-f4pcr"] Feb 26 08:32:30 crc kubenswrapper[4741]: E0226 08:32:30.598306 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1907a87-1ff8-4d9b-a81e-05f26f468a4d" containerName="oc" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.598330 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1907a87-1ff8-4d9b-a81e-05f26f468a4d" containerName="oc" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.598491 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1907a87-1ff8-4d9b-a81e-05f26f468a4d" containerName="oc" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.654976 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb"] Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.656033 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.657259 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.665764 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.666414 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-6f45l" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.666481 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.666505 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.687472 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb"] Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.765757 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tdw95"] Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.768283 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tdw95" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.772772 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-v5xfv" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.772772 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.773679 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.776222 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.811372 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-8nx8x"] Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.813626 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-8nx8x" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.829235 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.829405 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-8nx8x"] Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.834435 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p87fc\" (UniqueName: \"kubernetes.io/projected/6fe5145b-bbf9-47ac-b53e-1282479db87d-kube-api-access-p87fc\") pod \"frr-k8s-webhook-server-7f989f654f-mcrfb\" (UID: \"6fe5145b-bbf9-47ac-b53e-1282479db87d\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.834625 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a2705136-6518-4339-b135-2d6f71d0fe6b-frr-startup\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.834726 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a2705136-6518-4339-b135-2d6f71d0fe6b-reloader\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.834829 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a2705136-6518-4339-b135-2d6f71d0fe6b-frr-conf\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 
crc kubenswrapper[4741]: I0226 08:32:30.834904 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a2705136-6518-4339-b135-2d6f71d0fe6b-metrics\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.835027 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2705136-6518-4339-b135-2d6f71d0fe6b-metrics-certs\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.835163 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpcgj\" (UniqueName: \"kubernetes.io/projected/a2705136-6518-4339-b135-2d6f71d0fe6b-kube-api-access-dpcgj\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.835260 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a2705136-6518-4339-b135-2d6f71d0fe6b-frr-sockets\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.835358 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6fe5145b-bbf9-47ac-b53e-1282479db87d-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mcrfb\" (UID: \"6fe5145b-bbf9-47ac-b53e-1282479db87d\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.936824 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a2705136-6518-4339-b135-2d6f71d0fe6b-frr-startup\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.936895 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/468a5a70-08db-488d-9f31-f9835091c5ee-metrics-certs\") pod \"controller-86ddb6bd46-8nx8x\" (UID: \"468a5a70-08db-488d-9f31-f9835091c5ee\") " pod="metallb-system/controller-86ddb6bd46-8nx8x" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.936925 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a2705136-6518-4339-b135-2d6f71d0fe6b-reloader\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.936951 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a2705136-6518-4339-b135-2d6f71d0fe6b-frr-conf\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.936970 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a2705136-6518-4339-b135-2d6f71d0fe6b-metrics\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.936990 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qv7x\" (UniqueName: 
\"kubernetes.io/projected/4bf58d3b-55b2-408e-ab70-84e96ef92a64-kube-api-access-5qv7x\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.937032 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4bf58d3b-55b2-408e-ab70-84e96ef92a64-metallb-excludel2\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.937060 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2705136-6518-4339-b135-2d6f71d0fe6b-metrics-certs\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.937074 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/468a5a70-08db-488d-9f31-f9835091c5ee-cert\") pod \"controller-86ddb6bd46-8nx8x\" (UID: \"468a5a70-08db-488d-9f31-f9835091c5ee\") " pod="metallb-system/controller-86ddb6bd46-8nx8x" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.937138 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpcgj\" (UniqueName: \"kubernetes.io/projected/a2705136-6518-4339-b135-2d6f71d0fe6b-kube-api-access-dpcgj\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.937162 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a2705136-6518-4339-b135-2d6f71d0fe6b-frr-sockets\") pod \"frr-k8s-f4pcr\" (UID: 
\"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.937201 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsx9x\" (UniqueName: \"kubernetes.io/projected/468a5a70-08db-488d-9f31-f9835091c5ee-kube-api-access-wsx9x\") pod \"controller-86ddb6bd46-8nx8x\" (UID: \"468a5a70-08db-488d-9f31-f9835091c5ee\") " pod="metallb-system/controller-86ddb6bd46-8nx8x" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.937222 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6fe5145b-bbf9-47ac-b53e-1282479db87d-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mcrfb\" (UID: \"6fe5145b-bbf9-47ac-b53e-1282479db87d\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.937239 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-metrics-certs\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.937261 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-memberlist\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.937282 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p87fc\" (UniqueName: \"kubernetes.io/projected/6fe5145b-bbf9-47ac-b53e-1282479db87d-kube-api-access-p87fc\") pod \"frr-k8s-webhook-server-7f989f654f-mcrfb\" (UID: 
\"6fe5145b-bbf9-47ac-b53e-1282479db87d\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.938032 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a2705136-6518-4339-b135-2d6f71d0fe6b-frr-startup\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.938339 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a2705136-6518-4339-b135-2d6f71d0fe6b-reloader\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.938516 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a2705136-6518-4339-b135-2d6f71d0fe6b-frr-conf\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.938890 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a2705136-6518-4339-b135-2d6f71d0fe6b-metrics\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.939141 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a2705136-6518-4339-b135-2d6f71d0fe6b-frr-sockets\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.948046 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/a2705136-6518-4339-b135-2d6f71d0fe6b-metrics-certs\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.949749 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6fe5145b-bbf9-47ac-b53e-1282479db87d-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mcrfb\" (UID: \"6fe5145b-bbf9-47ac-b53e-1282479db87d\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.961181 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p87fc\" (UniqueName: \"kubernetes.io/projected/6fe5145b-bbf9-47ac-b53e-1282479db87d-kube-api-access-p87fc\") pod \"frr-k8s-webhook-server-7f989f654f-mcrfb\" (UID: \"6fe5145b-bbf9-47ac-b53e-1282479db87d\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" Feb 26 08:32:30 crc kubenswrapper[4741]: I0226 08:32:30.972989 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpcgj\" (UniqueName: \"kubernetes.io/projected/a2705136-6518-4339-b135-2d6f71d0fe6b-kube-api-access-dpcgj\") pod \"frr-k8s-f4pcr\" (UID: \"a2705136-6518-4339-b135-2d6f71d0fe6b\") " pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.032734 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.039472 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qv7x\" (UniqueName: \"kubernetes.io/projected/4bf58d3b-55b2-408e-ab70-84e96ef92a64-kube-api-access-5qv7x\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.039587 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4bf58d3b-55b2-408e-ab70-84e96ef92a64-metallb-excludel2\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.039638 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/468a5a70-08db-488d-9f31-f9835091c5ee-cert\") pod \"controller-86ddb6bd46-8nx8x\" (UID: \"468a5a70-08db-488d-9f31-f9835091c5ee\") " pod="metallb-system/controller-86ddb6bd46-8nx8x" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.039721 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsx9x\" (UniqueName: \"kubernetes.io/projected/468a5a70-08db-488d-9f31-f9835091c5ee-kube-api-access-wsx9x\") pod \"controller-86ddb6bd46-8nx8x\" (UID: \"468a5a70-08db-488d-9f31-f9835091c5ee\") " pod="metallb-system/controller-86ddb6bd46-8nx8x" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.039754 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-metrics-certs\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 
08:32:31.039784 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-memberlist\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.039866 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/468a5a70-08db-488d-9f31-f9835091c5ee-metrics-certs\") pod \"controller-86ddb6bd46-8nx8x\" (UID: \"468a5a70-08db-488d-9f31-f9835091c5ee\") " pod="metallb-system/controller-86ddb6bd46-8nx8x" Feb 26 08:32:31 crc kubenswrapper[4741]: E0226 08:32:31.039973 4741 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 26 08:32:31 crc kubenswrapper[4741]: E0226 08:32:31.040059 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-metrics-certs podName:4bf58d3b-55b2-408e-ab70-84e96ef92a64 nodeName:}" failed. No retries permitted until 2026-02-26 08:32:31.540034793 +0000 UTC m=+1186.535972170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-metrics-certs") pod "speaker-tdw95" (UID: "4bf58d3b-55b2-408e-ab70-84e96ef92a64") : secret "speaker-certs-secret" not found Feb 26 08:32:31 crc kubenswrapper[4741]: E0226 08:32:31.040055 4741 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 26 08:32:31 crc kubenswrapper[4741]: E0226 08:32:31.040167 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-memberlist podName:4bf58d3b-55b2-408e-ab70-84e96ef92a64 nodeName:}" failed. 
No retries permitted until 2026-02-26 08:32:31.540143636 +0000 UTC m=+1186.536081013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-memberlist") pod "speaker-tdw95" (UID: "4bf58d3b-55b2-408e-ab70-84e96ef92a64") : secret "metallb-memberlist" not found Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.040929 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4bf58d3b-55b2-408e-ab70-84e96ef92a64-metallb-excludel2\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.044374 4741 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.045410 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/468a5a70-08db-488d-9f31-f9835091c5ee-metrics-certs\") pod \"controller-86ddb6bd46-8nx8x\" (UID: \"468a5a70-08db-488d-9f31-f9835091c5ee\") " pod="metallb-system/controller-86ddb6bd46-8nx8x" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.052680 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.059679 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/468a5a70-08db-488d-9f31-f9835091c5ee-cert\") pod \"controller-86ddb6bd46-8nx8x\" (UID: \"468a5a70-08db-488d-9f31-f9835091c5ee\") " pod="metallb-system/controller-86ddb6bd46-8nx8x" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.065970 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qv7x\" (UniqueName: \"kubernetes.io/projected/4bf58d3b-55b2-408e-ab70-84e96ef92a64-kube-api-access-5qv7x\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.080029 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsx9x\" (UniqueName: \"kubernetes.io/projected/468a5a70-08db-488d-9f31-f9835091c5ee-kube-api-access-wsx9x\") pod \"controller-86ddb6bd46-8nx8x\" (UID: \"468a5a70-08db-488d-9f31-f9835091c5ee\") " pod="metallb-system/controller-86ddb6bd46-8nx8x" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.148896 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-8nx8x" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.347015 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.552250 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-metrics-certs\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.552307 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-memberlist\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:31 crc kubenswrapper[4741]: E0226 08:32:31.552568 4741 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 26 08:32:31 crc kubenswrapper[4741]: E0226 08:32:31.552651 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-memberlist podName:4bf58d3b-55b2-408e-ab70-84e96ef92a64 nodeName:}" failed. No retries permitted until 2026-02-26 08:32:32.552630579 +0000 UTC m=+1187.548567966 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-memberlist") pod "speaker-tdw95" (UID: "4bf58d3b-55b2-408e-ab70-84e96ef92a64") : secret "metallb-memberlist" not found Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.559563 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-metrics-certs\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.580777 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4pcr" event={"ID":"a2705136-6518-4339-b135-2d6f71d0fe6b","Type":"ContainerStarted","Data":"705858db391320bc2bcb7d77117b2c93f8b275b64d83c6d4cf9b4aafd9883021"} Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.635748 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb"] Feb 26 08:32:31 crc kubenswrapper[4741]: I0226 08:32:31.729326 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-8nx8x"] Feb 26 08:32:31 crc kubenswrapper[4741]: W0226 08:32:31.731904 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod468a5a70_08db_488d_9f31_f9835091c5ee.slice/crio-a597a4c3c96182040d97a0da0fc6dd86c213a41f5c6c42006b7372cbd1637761 WatchSource:0}: Error finding container a597a4c3c96182040d97a0da0fc6dd86c213a41f5c6c42006b7372cbd1637761: Status 404 returned error can't find the container with id a597a4c3c96182040d97a0da0fc6dd86c213a41f5c6c42006b7372cbd1637761 Feb 26 08:32:32 crc kubenswrapper[4741]: I0226 08:32:32.573183 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-memberlist\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:32 crc kubenswrapper[4741]: I0226 08:32:32.590284 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bf58d3b-55b2-408e-ab70-84e96ef92a64-memberlist\") pod \"speaker-tdw95\" (UID: \"4bf58d3b-55b2-408e-ab70-84e96ef92a64\") " pod="metallb-system/speaker-tdw95" Feb 26 08:32:32 crc kubenswrapper[4741]: I0226 08:32:32.591957 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-8nx8x" event={"ID":"468a5a70-08db-488d-9f31-f9835091c5ee","Type":"ContainerStarted","Data":"5865faf43ad33180793a47802be0bbfadf0e5f54169997dc994fa21d79dc3bec"} Feb 26 08:32:32 crc kubenswrapper[4741]: I0226 08:32:32.592022 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-8nx8x" event={"ID":"468a5a70-08db-488d-9f31-f9835091c5ee","Type":"ContainerStarted","Data":"7b0e57dc2216e4f6a2cf64bb956b5fdaaa7be70bcdfea4f6c191ea9a736d7d93"} Feb 26 08:32:32 crc kubenswrapper[4741]: I0226 08:32:32.592066 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-8nx8x" event={"ID":"468a5a70-08db-488d-9f31-f9835091c5ee","Type":"ContainerStarted","Data":"a597a4c3c96182040d97a0da0fc6dd86c213a41f5c6c42006b7372cbd1637761"} Feb 26 08:32:32 crc kubenswrapper[4741]: I0226 08:32:32.592200 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-8nx8x" Feb 26 08:32:32 crc kubenswrapper[4741]: I0226 08:32:32.593558 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" event={"ID":"6fe5145b-bbf9-47ac-b53e-1282479db87d","Type":"ContainerStarted","Data":"355f296e91ae5d9590840087140096f0934b121b7875ed6505e86f93ad1abde8"} Feb 
26 08:32:32 crc kubenswrapper[4741]: I0226 08:32:32.613435 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-tdw95" Feb 26 08:32:32 crc kubenswrapper[4741]: I0226 08:32:32.628784 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-8nx8x" podStartSLOduration=2.628746671 podStartE2EDuration="2.628746671s" podCreationTimestamp="2026-02-26 08:32:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:32:32.616818714 +0000 UTC m=+1187.612756111" watchObservedRunningTime="2026-02-26 08:32:32.628746671 +0000 UTC m=+1187.624684058" Feb 26 08:32:33 crc kubenswrapper[4741]: I0226 08:32:33.617318 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tdw95" event={"ID":"4bf58d3b-55b2-408e-ab70-84e96ef92a64","Type":"ContainerStarted","Data":"7c2d0906b16621778600eaa3e77a6dfb1201327a35b39a3017ae8a39559d89b0"} Feb 26 08:32:33 crc kubenswrapper[4741]: I0226 08:32:33.617818 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tdw95" event={"ID":"4bf58d3b-55b2-408e-ab70-84e96ef92a64","Type":"ContainerStarted","Data":"2fc721c88def4f7ed48ca53e7bb4d4159b75b385d2d6487b39bbdcac858ac291"} Feb 26 08:32:33 crc kubenswrapper[4741]: I0226 08:32:33.617831 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tdw95" event={"ID":"4bf58d3b-55b2-408e-ab70-84e96ef92a64","Type":"ContainerStarted","Data":"af8175b78c608d284377b893151e2133a8f0f3acc6d4e29fdddfaaab086d2d63"} Feb 26 08:32:33 crc kubenswrapper[4741]: I0226 08:32:33.618662 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tdw95" Feb 26 08:32:33 crc kubenswrapper[4741]: I0226 08:32:33.650224 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tdw95" 
podStartSLOduration=3.650200698 podStartE2EDuration="3.650200698s" podCreationTimestamp="2026-02-26 08:32:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:32:33.64782342 +0000 UTC m=+1188.643760807" watchObservedRunningTime="2026-02-26 08:32:33.650200698 +0000 UTC m=+1188.646138085" Feb 26 08:32:41 crc kubenswrapper[4741]: I0226 08:32:41.187701 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-8nx8x" Feb 26 08:32:42 crc kubenswrapper[4741]: I0226 08:32:42.618681 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tdw95" Feb 26 08:32:43 crc kubenswrapper[4741]: I0226 08:32:43.720721 4741 generic.go:334] "Generic (PLEG): container finished" podID="a2705136-6518-4339-b135-2d6f71d0fe6b" containerID="6761063c5ed21d1447ee217a5157024814c4cdda71a753dff062aa0a8240084e" exitCode=0 Feb 26 08:32:43 crc kubenswrapper[4741]: I0226 08:32:43.720882 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4pcr" event={"ID":"a2705136-6518-4339-b135-2d6f71d0fe6b","Type":"ContainerDied","Data":"6761063c5ed21d1447ee217a5157024814c4cdda71a753dff062aa0a8240084e"} Feb 26 08:32:43 crc kubenswrapper[4741]: I0226 08:32:43.723017 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" event={"ID":"6fe5145b-bbf9-47ac-b53e-1282479db87d","Type":"ContainerStarted","Data":"efcfb7ec5e49e6431552b53e390255b1fe83dde88ce9b97ab41e902ba1bf6006"} Feb 26 08:32:43 crc kubenswrapper[4741]: I0226 08:32:43.723185 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" Feb 26 08:32:43 crc kubenswrapper[4741]: I0226 08:32:43.806561 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" podStartSLOduration=2.002133391 podStartE2EDuration="13.806535354s" podCreationTimestamp="2026-02-26 08:32:30 +0000 UTC" firstStartedPulling="2026-02-26 08:32:31.644404474 +0000 UTC m=+1186.640341861" lastFinishedPulling="2026-02-26 08:32:43.448806437 +0000 UTC m=+1198.444743824" observedRunningTime="2026-02-26 08:32:43.800207675 +0000 UTC m=+1198.796145062" watchObservedRunningTime="2026-02-26 08:32:43.806535354 +0000 UTC m=+1198.802472741" Feb 26 08:32:44 crc kubenswrapper[4741]: I0226 08:32:44.740758 4741 generic.go:334] "Generic (PLEG): container finished" podID="a2705136-6518-4339-b135-2d6f71d0fe6b" containerID="130ec64480bdd4f572b9c76fb044e19448d9852c33b9e275e9363e8fb2952a92" exitCode=0 Feb 26 08:32:44 crc kubenswrapper[4741]: I0226 08:32:44.740899 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4pcr" event={"ID":"a2705136-6518-4339-b135-2d6f71d0fe6b","Type":"ContainerDied","Data":"130ec64480bdd4f572b9c76fb044e19448d9852c33b9e275e9363e8fb2952a92"} Feb 26 08:32:45 crc kubenswrapper[4741]: I0226 08:32:45.466733 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-g7rkn"] Feb 26 08:32:45 crc kubenswrapper[4741]: I0226 08:32:45.468036 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-g7rkn" Feb 26 08:32:45 crc kubenswrapper[4741]: I0226 08:32:45.471084 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wq7c4" Feb 26 08:32:45 crc kubenswrapper[4741]: I0226 08:32:45.471516 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 26 08:32:45 crc kubenswrapper[4741]: I0226 08:32:45.471509 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 26 08:32:45 crc kubenswrapper[4741]: I0226 08:32:45.471812 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7ftk\" (UniqueName: \"kubernetes.io/projected/fcb06827-ed6b-45ff-b0f8-fc8103058d6f-kube-api-access-r7ftk\") pod \"openstack-operator-index-g7rkn\" (UID: \"fcb06827-ed6b-45ff-b0f8-fc8103058d6f\") " pod="openstack-operators/openstack-operator-index-g7rkn" Feb 26 08:32:45 crc kubenswrapper[4741]: I0226 08:32:45.538163 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g7rkn"] Feb 26 08:32:45 crc kubenswrapper[4741]: I0226 08:32:45.573835 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7ftk\" (UniqueName: \"kubernetes.io/projected/fcb06827-ed6b-45ff-b0f8-fc8103058d6f-kube-api-access-r7ftk\") pod \"openstack-operator-index-g7rkn\" (UID: \"fcb06827-ed6b-45ff-b0f8-fc8103058d6f\") " pod="openstack-operators/openstack-operator-index-g7rkn" Feb 26 08:32:45 crc kubenswrapper[4741]: I0226 08:32:45.604628 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7ftk\" (UniqueName: \"kubernetes.io/projected/fcb06827-ed6b-45ff-b0f8-fc8103058d6f-kube-api-access-r7ftk\") pod \"openstack-operator-index-g7rkn\" (UID: 
\"fcb06827-ed6b-45ff-b0f8-fc8103058d6f\") " pod="openstack-operators/openstack-operator-index-g7rkn" Feb 26 08:32:45 crc kubenswrapper[4741]: I0226 08:32:45.753815 4741 generic.go:334] "Generic (PLEG): container finished" podID="a2705136-6518-4339-b135-2d6f71d0fe6b" containerID="990a6fb06357e62413bce9f039e306968196a7c549ac9ac3744ff74e86859fe5" exitCode=0 Feb 26 08:32:45 crc kubenswrapper[4741]: I0226 08:32:45.753889 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4pcr" event={"ID":"a2705136-6518-4339-b135-2d6f71d0fe6b","Type":"ContainerDied","Data":"990a6fb06357e62413bce9f039e306968196a7c549ac9ac3744ff74e86859fe5"} Feb 26 08:32:45 crc kubenswrapper[4741]: I0226 08:32:45.859259 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wq7c4" Feb 26 08:32:45 crc kubenswrapper[4741]: I0226 08:32:45.867333 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g7rkn" Feb 26 08:32:46 crc kubenswrapper[4741]: I0226 08:32:46.337145 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g7rkn"] Feb 26 08:32:46 crc kubenswrapper[4741]: I0226 08:32:46.764825 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g7rkn" event={"ID":"fcb06827-ed6b-45ff-b0f8-fc8103058d6f","Type":"ContainerStarted","Data":"b0c2b7a68a514544e615c61d5a543cdf199ced26bdec52ac757e1ee5e33a8a4d"} Feb 26 08:32:46 crc kubenswrapper[4741]: I0226 08:32:46.770440 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4pcr" event={"ID":"a2705136-6518-4339-b135-2d6f71d0fe6b","Type":"ContainerStarted","Data":"43a8807fede47d23a802d95bf6d2298af56092890d654566adcc0137f9cc9bc7"} Feb 26 08:32:46 crc kubenswrapper[4741]: I0226 08:32:46.770476 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-f4pcr" event={"ID":"a2705136-6518-4339-b135-2d6f71d0fe6b","Type":"ContainerStarted","Data":"b63f7ba609d315bacc63e0980c17a018b3233a7e243ce73658ffbae10a9f3415"} Feb 26 08:32:46 crc kubenswrapper[4741]: I0226 08:32:46.770489 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4pcr" event={"ID":"a2705136-6518-4339-b135-2d6f71d0fe6b","Type":"ContainerStarted","Data":"e272f12077d16e6b032f68a43293190cf9f21e65f0b7ddec9e686a195d3561d0"} Feb 26 08:32:47 crc kubenswrapper[4741]: I0226 08:32:47.784872 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4pcr" event={"ID":"a2705136-6518-4339-b135-2d6f71d0fe6b","Type":"ContainerStarted","Data":"4ff9c00cfcdaf7017ddb1690fd5148ee237b8d742187dc9e6a81962aaf27106b"} Feb 26 08:32:47 crc kubenswrapper[4741]: I0226 08:32:47.784922 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4pcr" event={"ID":"a2705136-6518-4339-b135-2d6f71d0fe6b","Type":"ContainerStarted","Data":"33d5eea0080706dc5588cdc30abd8cb1ca59473d1578d987c6e11a5be6b65d3f"} Feb 26 08:32:47 crc kubenswrapper[4741]: I0226 08:32:47.784931 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4pcr" event={"ID":"a2705136-6518-4339-b135-2d6f71d0fe6b","Type":"ContainerStarted","Data":"696d43a350a5cde9c6509b148145f8bbbf1b3417e838cde28d927e233d1cd347"} Feb 26 08:32:47 crc kubenswrapper[4741]: I0226 08:32:47.785075 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:48 crc kubenswrapper[4741]: I0226 08:32:48.835617 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-f4pcr" podStartSLOduration=6.766552766 podStartE2EDuration="18.835590403s" podCreationTimestamp="2026-02-26 08:32:30 +0000 UTC" firstStartedPulling="2026-02-26 08:32:31.346616433 +0000 UTC m=+1186.342553820" lastFinishedPulling="2026-02-26 08:32:43.41565406 
+0000 UTC m=+1198.411591457" observedRunningTime="2026-02-26 08:32:47.812513861 +0000 UTC m=+1202.808451258" watchObservedRunningTime="2026-02-26 08:32:48.835590403 +0000 UTC m=+1203.831527790" Feb 26 08:32:48 crc kubenswrapper[4741]: I0226 08:32:48.841818 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-g7rkn"] Feb 26 08:32:49 crc kubenswrapper[4741]: I0226 08:32:49.250578 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-thrfc"] Feb 26 08:32:49 crc kubenswrapper[4741]: I0226 08:32:49.253745 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-thrfc" Feb 26 08:32:49 crc kubenswrapper[4741]: I0226 08:32:49.270734 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59dft\" (UniqueName: \"kubernetes.io/projected/2fcbc58b-4880-4c34-8d80-16c8be56db58-kube-api-access-59dft\") pod \"openstack-operator-index-thrfc\" (UID: \"2fcbc58b-4880-4c34-8d80-16c8be56db58\") " pod="openstack-operators/openstack-operator-index-thrfc" Feb 26 08:32:49 crc kubenswrapper[4741]: I0226 08:32:49.282429 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-thrfc"] Feb 26 08:32:49 crc kubenswrapper[4741]: I0226 08:32:49.373717 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59dft\" (UniqueName: \"kubernetes.io/projected/2fcbc58b-4880-4c34-8d80-16c8be56db58-kube-api-access-59dft\") pod \"openstack-operator-index-thrfc\" (UID: \"2fcbc58b-4880-4c34-8d80-16c8be56db58\") " pod="openstack-operators/openstack-operator-index-thrfc" Feb 26 08:32:49 crc kubenswrapper[4741]: I0226 08:32:49.411302 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59dft\" (UniqueName: 
\"kubernetes.io/projected/2fcbc58b-4880-4c34-8d80-16c8be56db58-kube-api-access-59dft\") pod \"openstack-operator-index-thrfc\" (UID: \"2fcbc58b-4880-4c34-8d80-16c8be56db58\") " pod="openstack-operators/openstack-operator-index-thrfc" Feb 26 08:32:49 crc kubenswrapper[4741]: I0226 08:32:49.579687 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-thrfc" Feb 26 08:32:51 crc kubenswrapper[4741]: I0226 08:32:51.034103 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:51 crc kubenswrapper[4741]: I0226 08:32:51.087427 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:32:51 crc kubenswrapper[4741]: I0226 08:32:51.617328 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-thrfc"] Feb 26 08:32:52 crc kubenswrapper[4741]: W0226 08:32:52.050852 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fcbc58b_4880_4c34_8d80_16c8be56db58.slice/crio-558735639f1d9268d077848448f8a9aaed5416cb2cb89808dfbdfd01344cae8d WatchSource:0}: Error finding container 558735639f1d9268d077848448f8a9aaed5416cb2cb89808dfbdfd01344cae8d: Status 404 returned error can't find the container with id 558735639f1d9268d077848448f8a9aaed5416cb2cb89808dfbdfd01344cae8d Feb 26 08:32:52 crc kubenswrapper[4741]: I0226 08:32:52.839365 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-thrfc" event={"ID":"2fcbc58b-4880-4c34-8d80-16c8be56db58","Type":"ContainerStarted","Data":"558735639f1d9268d077848448f8a9aaed5416cb2cb89808dfbdfd01344cae8d"} Feb 26 08:32:55 crc kubenswrapper[4741]: I0226 08:32:55.149173 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:32:55 crc kubenswrapper[4741]: I0226 08:32:55.150654 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:32:59 crc kubenswrapper[4741]: I0226 08:32:59.908543 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-g7rkn" podUID="fcb06827-ed6b-45ff-b0f8-fc8103058d6f" containerName="registry-server" containerID="cri-o://8ca4bf11803e59299e69c4b526175e94c1505ef7c1e7e5abc7187ee2253cfb1e" gracePeriod=2 Feb 26 08:32:59 crc kubenswrapper[4741]: I0226 08:32:59.908499 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g7rkn" event={"ID":"fcb06827-ed6b-45ff-b0f8-fc8103058d6f","Type":"ContainerStarted","Data":"8ca4bf11803e59299e69c4b526175e94c1505ef7c1e7e5abc7187ee2253cfb1e"} Feb 26 08:32:59 crc kubenswrapper[4741]: I0226 08:32:59.911219 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-thrfc" event={"ID":"2fcbc58b-4880-4c34-8d80-16c8be56db58","Type":"ContainerStarted","Data":"432e5ca6f23dd0c155b14f0f0e111730ee48a4181a8827f6bcfc2ed65a15e94e"} Feb 26 08:32:59 crc kubenswrapper[4741]: I0226 08:32:59.931943 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-g7rkn" podStartSLOduration=2.452276023 podStartE2EDuration="14.931920762s" podCreationTimestamp="2026-02-26 08:32:45 +0000 UTC" firstStartedPulling="2026-02-26 08:32:46.340103842 +0000 UTC 
m=+1201.336041229" lastFinishedPulling="2026-02-26 08:32:58.819748571 +0000 UTC m=+1213.815685968" observedRunningTime="2026-02-26 08:32:59.929652668 +0000 UTC m=+1214.925590055" watchObservedRunningTime="2026-02-26 08:32:59.931920762 +0000 UTC m=+1214.927858149" Feb 26 08:32:59 crc kubenswrapper[4741]: I0226 08:32:59.949074 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-thrfc" podStartSLOduration=4.182997525 podStartE2EDuration="10.949045496s" podCreationTimestamp="2026-02-26 08:32:49 +0000 UTC" firstStartedPulling="2026-02-26 08:32:52.054102031 +0000 UTC m=+1207.050039428" lastFinishedPulling="2026-02-26 08:32:58.820149972 +0000 UTC m=+1213.816087399" observedRunningTime="2026-02-26 08:32:59.948009527 +0000 UTC m=+1214.943946934" watchObservedRunningTime="2026-02-26 08:32:59.949045496 +0000 UTC m=+1214.944982883" Feb 26 08:33:01 crc kubenswrapper[4741]: I0226 08:33:00.922266 4741 generic.go:334] "Generic (PLEG): container finished" podID="fcb06827-ed6b-45ff-b0f8-fc8103058d6f" containerID="8ca4bf11803e59299e69c4b526175e94c1505ef7c1e7e5abc7187ee2253cfb1e" exitCode=0 Feb 26 08:33:01 crc kubenswrapper[4741]: I0226 08:33:00.922367 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g7rkn" event={"ID":"fcb06827-ed6b-45ff-b0f8-fc8103058d6f","Type":"ContainerDied","Data":"8ca4bf11803e59299e69c4b526175e94c1505ef7c1e7e5abc7187ee2253cfb1e"} Feb 26 08:33:01 crc kubenswrapper[4741]: I0226 08:33:00.922765 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g7rkn" event={"ID":"fcb06827-ed6b-45ff-b0f8-fc8103058d6f","Type":"ContainerDied","Data":"b0c2b7a68a514544e615c61d5a543cdf199ced26bdec52ac757e1ee5e33a8a4d"} Feb 26 08:33:01 crc kubenswrapper[4741]: I0226 08:33:00.922782 4741 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b0c2b7a68a514544e615c61d5a543cdf199ced26bdec52ac757e1ee5e33a8a4d" Feb 26 08:33:01 crc kubenswrapper[4741]: I0226 08:33:00.960657 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g7rkn" Feb 26 08:33:01 crc kubenswrapper[4741]: I0226 08:33:00.988780 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7ftk\" (UniqueName: \"kubernetes.io/projected/fcb06827-ed6b-45ff-b0f8-fc8103058d6f-kube-api-access-r7ftk\") pod \"fcb06827-ed6b-45ff-b0f8-fc8103058d6f\" (UID: \"fcb06827-ed6b-45ff-b0f8-fc8103058d6f\") " Feb 26 08:33:01 crc kubenswrapper[4741]: I0226 08:33:00.997862 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcb06827-ed6b-45ff-b0f8-fc8103058d6f-kube-api-access-r7ftk" (OuterVolumeSpecName: "kube-api-access-r7ftk") pod "fcb06827-ed6b-45ff-b0f8-fc8103058d6f" (UID: "fcb06827-ed6b-45ff-b0f8-fc8103058d6f"). InnerVolumeSpecName "kube-api-access-r7ftk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:33:01 crc kubenswrapper[4741]: I0226 08:33:01.036504 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-f4pcr" Feb 26 08:33:01 crc kubenswrapper[4741]: I0226 08:33:01.070326 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" Feb 26 08:33:01 crc kubenswrapper[4741]: I0226 08:33:01.092201 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7ftk\" (UniqueName: \"kubernetes.io/projected/fcb06827-ed6b-45ff-b0f8-fc8103058d6f-kube-api-access-r7ftk\") on node \"crc\" DevicePath \"\"" Feb 26 08:33:01 crc kubenswrapper[4741]: I0226 08:33:01.936459 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-g7rkn" Feb 26 08:33:01 crc kubenswrapper[4741]: I0226 08:33:01.968290 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-g7rkn"] Feb 26 08:33:01 crc kubenswrapper[4741]: I0226 08:33:01.975561 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-g7rkn"] Feb 26 08:33:03 crc kubenswrapper[4741]: I0226 08:33:03.809376 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcb06827-ed6b-45ff-b0f8-fc8103058d6f" path="/var/lib/kubelet/pods/fcb06827-ed6b-45ff-b0f8-fc8103058d6f/volumes" Feb 26 08:33:09 crc kubenswrapper[4741]: I0226 08:33:09.580800 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-thrfc" Feb 26 08:33:09 crc kubenswrapper[4741]: I0226 08:33:09.581399 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-thrfc" Feb 26 08:33:09 crc kubenswrapper[4741]: I0226 08:33:09.646559 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-thrfc" Feb 26 08:33:10 crc kubenswrapper[4741]: I0226 08:33:10.161545 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-thrfc" Feb 26 08:33:11 crc kubenswrapper[4741]: I0226 08:33:11.871908 4741 scope.go:117] "RemoveContainer" containerID="ad49ad0cc0c2e49ab21c6e6e93285f3c64857d43e31f0c825cc5649135553f43" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.537713 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f"] Feb 26 08:33:17 crc kubenswrapper[4741]: E0226 08:33:17.538873 4741 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fcb06827-ed6b-45ff-b0f8-fc8103058d6f" containerName="registry-server" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.538888 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb06827-ed6b-45ff-b0f8-fc8103058d6f" containerName="registry-server" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.539048 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcb06827-ed6b-45ff-b0f8-fc8103058d6f" containerName="registry-server" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.540442 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.542560 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-5zmzh" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.558535 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f"] Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.723706 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8lm5\" (UniqueName: \"kubernetes.io/projected/9e45a379-1f39-4497-a1c9-cde834f3dfcc-kube-api-access-q8lm5\") pod \"b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f\" (UID: \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\") " pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.723776 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e45a379-1f39-4497-a1c9-cde834f3dfcc-bundle\") pod \"b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f\" (UID: \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\") " 
pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.723802 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e45a379-1f39-4497-a1c9-cde834f3dfcc-util\") pod \"b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f\" (UID: \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\") " pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.826441 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e45a379-1f39-4497-a1c9-cde834f3dfcc-bundle\") pod \"b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f\" (UID: \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\") " pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.826526 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e45a379-1f39-4497-a1c9-cde834f3dfcc-util\") pod \"b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f\" (UID: \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\") " pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.826966 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8lm5\" (UniqueName: \"kubernetes.io/projected/9e45a379-1f39-4497-a1c9-cde834f3dfcc-kube-api-access-q8lm5\") pod \"b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f\" (UID: \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\") " pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 
08:33:17.827503 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e45a379-1f39-4497-a1c9-cde834f3dfcc-util\") pod \"b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f\" (UID: \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\") " pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.827524 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e45a379-1f39-4497-a1c9-cde834f3dfcc-bundle\") pod \"b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f\" (UID: \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\") " pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" Feb 26 08:33:17 crc kubenswrapper[4741]: I0226 08:33:17.861736 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8lm5\" (UniqueName: \"kubernetes.io/projected/9e45a379-1f39-4497-a1c9-cde834f3dfcc-kube-api-access-q8lm5\") pod \"b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f\" (UID: \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\") " pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" Feb 26 08:33:18 crc kubenswrapper[4741]: I0226 08:33:18.159387 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" Feb 26 08:33:18 crc kubenswrapper[4741]: I0226 08:33:18.630359 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f"] Feb 26 08:33:19 crc kubenswrapper[4741]: I0226 08:33:19.205153 4741 generic.go:334] "Generic (PLEG): container finished" podID="9e45a379-1f39-4497-a1c9-cde834f3dfcc" containerID="8494336eee04c7ae630436c4cc088140e43401f6061390117b610d71330b3c59" exitCode=0 Feb 26 08:33:19 crc kubenswrapper[4741]: I0226 08:33:19.205414 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" event={"ID":"9e45a379-1f39-4497-a1c9-cde834f3dfcc","Type":"ContainerDied","Data":"8494336eee04c7ae630436c4cc088140e43401f6061390117b610d71330b3c59"} Feb 26 08:33:19 crc kubenswrapper[4741]: I0226 08:33:19.206250 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" event={"ID":"9e45a379-1f39-4497-a1c9-cde834f3dfcc","Type":"ContainerStarted","Data":"885ffe189990df66d87007f2a39ef264f6eca0802847ae18d7169159069f87ad"} Feb 26 08:33:20 crc kubenswrapper[4741]: I0226 08:33:20.219774 4741 generic.go:334] "Generic (PLEG): container finished" podID="9e45a379-1f39-4497-a1c9-cde834f3dfcc" containerID="e8972f1c168825e9810a3f5d8d0c1790e69ee4407551a0230b49c2cdb0ca0887" exitCode=0 Feb 26 08:33:20 crc kubenswrapper[4741]: I0226 08:33:20.219867 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" event={"ID":"9e45a379-1f39-4497-a1c9-cde834f3dfcc","Type":"ContainerDied","Data":"e8972f1c168825e9810a3f5d8d0c1790e69ee4407551a0230b49c2cdb0ca0887"} Feb 26 08:33:21 crc kubenswrapper[4741]: I0226 08:33:21.234274 4741 generic.go:334] 
"Generic (PLEG): container finished" podID="9e45a379-1f39-4497-a1c9-cde834f3dfcc" containerID="885afb802bfd3d9299c566be5168f2b4f798c1c57fcba3d2e387ddc6ef65320b" exitCode=0 Feb 26 08:33:21 crc kubenswrapper[4741]: I0226 08:33:21.234376 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" event={"ID":"9e45a379-1f39-4497-a1c9-cde834f3dfcc","Type":"ContainerDied","Data":"885afb802bfd3d9299c566be5168f2b4f798c1c57fcba3d2e387ddc6ef65320b"} Feb 26 08:33:22 crc kubenswrapper[4741]: I0226 08:33:22.696991 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" Feb 26 08:33:22 crc kubenswrapper[4741]: I0226 08:33:22.839914 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e45a379-1f39-4497-a1c9-cde834f3dfcc-bundle\") pod \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\" (UID: \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\") " Feb 26 08:33:22 crc kubenswrapper[4741]: I0226 08:33:22.839970 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e45a379-1f39-4497-a1c9-cde834f3dfcc-util\") pod \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\" (UID: \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\") " Feb 26 08:33:22 crc kubenswrapper[4741]: I0226 08:33:22.840276 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8lm5\" (UniqueName: \"kubernetes.io/projected/9e45a379-1f39-4497-a1c9-cde834f3dfcc-kube-api-access-q8lm5\") pod \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\" (UID: \"9e45a379-1f39-4497-a1c9-cde834f3dfcc\") " Feb 26 08:33:22 crc kubenswrapper[4741]: I0226 08:33:22.840807 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9e45a379-1f39-4497-a1c9-cde834f3dfcc-bundle" (OuterVolumeSpecName: "bundle") pod "9e45a379-1f39-4497-a1c9-cde834f3dfcc" (UID: "9e45a379-1f39-4497-a1c9-cde834f3dfcc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:33:22 crc kubenswrapper[4741]: I0226 08:33:22.852242 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e45a379-1f39-4497-a1c9-cde834f3dfcc-kube-api-access-q8lm5" (OuterVolumeSpecName: "kube-api-access-q8lm5") pod "9e45a379-1f39-4497-a1c9-cde834f3dfcc" (UID: "9e45a379-1f39-4497-a1c9-cde834f3dfcc"). InnerVolumeSpecName "kube-api-access-q8lm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:33:22 crc kubenswrapper[4741]: I0226 08:33:22.857012 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e45a379-1f39-4497-a1c9-cde834f3dfcc-util" (OuterVolumeSpecName: "util") pod "9e45a379-1f39-4497-a1c9-cde834f3dfcc" (UID: "9e45a379-1f39-4497-a1c9-cde834f3dfcc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:33:22 crc kubenswrapper[4741]: I0226 08:33:22.942763 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8lm5\" (UniqueName: \"kubernetes.io/projected/9e45a379-1f39-4497-a1c9-cde834f3dfcc-kube-api-access-q8lm5\") on node \"crc\" DevicePath \"\"" Feb 26 08:33:22 crc kubenswrapper[4741]: I0226 08:33:22.942795 4741 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e45a379-1f39-4497-a1c9-cde834f3dfcc-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:33:22 crc kubenswrapper[4741]: I0226 08:33:22.942806 4741 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e45a379-1f39-4497-a1c9-cde834f3dfcc-util\") on node \"crc\" DevicePath \"\"" Feb 26 08:33:23 crc kubenswrapper[4741]: I0226 08:33:23.266649 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" event={"ID":"9e45a379-1f39-4497-a1c9-cde834f3dfcc","Type":"ContainerDied","Data":"885ffe189990df66d87007f2a39ef264f6eca0802847ae18d7169159069f87ad"} Feb 26 08:33:23 crc kubenswrapper[4741]: I0226 08:33:23.266733 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="885ffe189990df66d87007f2a39ef264f6eca0802847ae18d7169159069f87ad" Feb 26 08:33:23 crc kubenswrapper[4741]: I0226 08:33:23.266903 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f" Feb 26 08:33:25 crc kubenswrapper[4741]: I0226 08:33:25.149513 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:33:25 crc kubenswrapper[4741]: I0226 08:33:25.150098 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:33:25 crc kubenswrapper[4741]: I0226 08:33:25.150269 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:33:25 crc kubenswrapper[4741]: I0226 08:33:25.151543 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2002cb3f72e48e911f95f750897f0b9b646f0cc9cd35a0939515422d73baaa0a"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 08:33:25 crc kubenswrapper[4741]: I0226 08:33:25.151635 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://2002cb3f72e48e911f95f750897f0b9b646f0cc9cd35a0939515422d73baaa0a" gracePeriod=600 Feb 26 08:33:25 crc kubenswrapper[4741]: I0226 08:33:25.308165 4741 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"2002cb3f72e48e911f95f750897f0b9b646f0cc9cd35a0939515422d73baaa0a"} Feb 26 08:33:25 crc kubenswrapper[4741]: I0226 08:33:25.308272 4741 scope.go:117] "RemoveContainer" containerID="68abf4356f81aabd267885bc4a138705d4a3fc790f51e9e7362b1f352ff25cfd" Feb 26 08:33:25 crc kubenswrapper[4741]: I0226 08:33:25.308174 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="2002cb3f72e48e911f95f750897f0b9b646f0cc9cd35a0939515422d73baaa0a" exitCode=0 Feb 26 08:33:26 crc kubenswrapper[4741]: I0226 08:33:26.322632 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"288a5333cab594a75e3a28112d2f250579a2bdc002b7db4ded270dcedecce3e8"} Feb 26 08:33:31 crc kubenswrapper[4741]: I0226 08:33:31.097133 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-76fc895699-z8llq"] Feb 26 08:33:31 crc kubenswrapper[4741]: E0226 08:33:31.098425 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e45a379-1f39-4497-a1c9-cde834f3dfcc" containerName="pull" Feb 26 08:33:31 crc kubenswrapper[4741]: I0226 08:33:31.098448 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e45a379-1f39-4497-a1c9-cde834f3dfcc" containerName="pull" Feb 26 08:33:31 crc kubenswrapper[4741]: E0226 08:33:31.098463 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e45a379-1f39-4497-a1c9-cde834f3dfcc" containerName="util" Feb 26 08:33:31 crc kubenswrapper[4741]: I0226 08:33:31.098471 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e45a379-1f39-4497-a1c9-cde834f3dfcc" containerName="util" Feb 26 08:33:31 crc kubenswrapper[4741]: E0226 
08:33:31.098502 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e45a379-1f39-4497-a1c9-cde834f3dfcc" containerName="extract" Feb 26 08:33:31 crc kubenswrapper[4741]: I0226 08:33:31.098512 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e45a379-1f39-4497-a1c9-cde834f3dfcc" containerName="extract" Feb 26 08:33:31 crc kubenswrapper[4741]: I0226 08:33:31.098731 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e45a379-1f39-4497-a1c9-cde834f3dfcc" containerName="extract" Feb 26 08:33:31 crc kubenswrapper[4741]: I0226 08:33:31.099661 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-76fc895699-z8llq" Feb 26 08:33:31 crc kubenswrapper[4741]: I0226 08:33:31.102915 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-hrvcp" Feb 26 08:33:32 crc kubenswrapper[4741]: I0226 08:33:31.174128 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x84ph\" (UniqueName: \"kubernetes.io/projected/4b189628-5343-4512-bf5d-1daf4abf4079-kube-api-access-x84ph\") pod \"openstack-operator-controller-init-76fc895699-z8llq\" (UID: \"4b189628-5343-4512-bf5d-1daf4abf4079\") " pod="openstack-operators/openstack-operator-controller-init-76fc895699-z8llq" Feb 26 08:33:32 crc kubenswrapper[4741]: I0226 08:33:31.275215 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x84ph\" (UniqueName: \"kubernetes.io/projected/4b189628-5343-4512-bf5d-1daf4abf4079-kube-api-access-x84ph\") pod \"openstack-operator-controller-init-76fc895699-z8llq\" (UID: \"4b189628-5343-4512-bf5d-1daf4abf4079\") " pod="openstack-operators/openstack-operator-controller-init-76fc895699-z8llq" Feb 26 08:33:32 crc kubenswrapper[4741]: I0226 08:33:31.298944 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x84ph\" (UniqueName: \"kubernetes.io/projected/4b189628-5343-4512-bf5d-1daf4abf4079-kube-api-access-x84ph\") pod \"openstack-operator-controller-init-76fc895699-z8llq\" (UID: \"4b189628-5343-4512-bf5d-1daf4abf4079\") " pod="openstack-operators/openstack-operator-controller-init-76fc895699-z8llq" Feb 26 08:33:32 crc kubenswrapper[4741]: I0226 08:33:31.421617 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-76fc895699-z8llq" Feb 26 08:33:32 crc kubenswrapper[4741]: I0226 08:33:32.335549 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 08:33:32 crc kubenswrapper[4741]: I0226 08:33:32.335572 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-jqlhm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 08:33:32 crc kubenswrapper[4741]: I0226 08:33:32.335624 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 08:33:32 crc kubenswrapper[4741]: I0226 08:33:32.335645 4741 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" podUID="b029b8c8-35eb-4509-a29a-9ada4434b899" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 08:33:32 crc kubenswrapper[4741]: I0226 08:33:32.348690 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 08:33:32 crc kubenswrapper[4741]: I0226 08:33:32.348762 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 08:33:32 crc kubenswrapper[4741]: I0226 08:33:32.372878 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-qjtwt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 08:33:32 crc kubenswrapper[4741]: I0226 08:33:32.372943 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" podUID="aad6cae3-3b9d-4d9e-8549-55da6e10901d" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 
26 08:33:32 crc kubenswrapper[4741]: I0226 08:33:32.823580 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-76fc895699-z8llq"] Feb 26 08:33:33 crc kubenswrapper[4741]: I0226 08:33:33.147952 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-76fc895699-z8llq"] Feb 26 08:33:33 crc kubenswrapper[4741]: I0226 08:33:33.425409 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-76fc895699-z8llq" event={"ID":"4b189628-5343-4512-bf5d-1daf4abf4079","Type":"ContainerStarted","Data":"58a695904ec4b9c1d482df37aa6dce7cedffe6bfc292408476870680a496b809"} Feb 26 08:33:41 crc kubenswrapper[4741]: I0226 08:33:41.715971 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-76fc895699-z8llq" event={"ID":"4b189628-5343-4512-bf5d-1daf4abf4079","Type":"ContainerStarted","Data":"2e7bc6fda9787884e948aaf981f36e3d7402ede08b8c08e8f0196890fa3cc9f0"} Feb 26 08:33:41 crc kubenswrapper[4741]: I0226 08:33:41.717091 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-76fc895699-z8llq" Feb 26 08:33:41 crc kubenswrapper[4741]: I0226 08:33:41.754462 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-76fc895699-z8llq" podStartSLOduration=3.122744414 podStartE2EDuration="10.754434736s" podCreationTimestamp="2026-02-26 08:33:31 +0000 UTC" firstStartedPulling="2026-02-26 08:33:33.169222063 +0000 UTC m=+1248.165159450" lastFinishedPulling="2026-02-26 08:33:40.800912365 +0000 UTC m=+1255.796849772" observedRunningTime="2026-02-26 08:33:41.745611687 +0000 UTC m=+1256.741549074" watchObservedRunningTime="2026-02-26 08:33:41.754434736 +0000 UTC m=+1256.750372133" Feb 26 08:33:51 crc kubenswrapper[4741]: 
I0226 08:33:51.425400 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-76fc895699-z8llq" Feb 26 08:34:00 crc kubenswrapper[4741]: I0226 08:34:00.166100 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534914-q2xf7"] Feb 26 08:34:00 crc kubenswrapper[4741]: I0226 08:34:00.167910 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534914-q2xf7" Feb 26 08:34:00 crc kubenswrapper[4741]: I0226 08:34:00.171510 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:34:00 crc kubenswrapper[4741]: I0226 08:34:00.171524 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:34:00 crc kubenswrapper[4741]: I0226 08:34:00.171645 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:34:00 crc kubenswrapper[4741]: I0226 08:34:00.177904 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534914-q2xf7"] Feb 26 08:34:00 crc kubenswrapper[4741]: I0226 08:34:00.191389 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czc67\" (UniqueName: \"kubernetes.io/projected/92daa308-ea4d-4e9d-aa8a-59f011190b00-kube-api-access-czc67\") pod \"auto-csr-approver-29534914-q2xf7\" (UID: \"92daa308-ea4d-4e9d-aa8a-59f011190b00\") " pod="openshift-infra/auto-csr-approver-29534914-q2xf7" Feb 26 08:34:00 crc kubenswrapper[4741]: I0226 08:34:00.293878 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czc67\" (UniqueName: \"kubernetes.io/projected/92daa308-ea4d-4e9d-aa8a-59f011190b00-kube-api-access-czc67\") pod \"auto-csr-approver-29534914-q2xf7\" (UID: 
\"92daa308-ea4d-4e9d-aa8a-59f011190b00\") " pod="openshift-infra/auto-csr-approver-29534914-q2xf7" Feb 26 08:34:00 crc kubenswrapper[4741]: I0226 08:34:00.319946 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czc67\" (UniqueName: \"kubernetes.io/projected/92daa308-ea4d-4e9d-aa8a-59f011190b00-kube-api-access-czc67\") pod \"auto-csr-approver-29534914-q2xf7\" (UID: \"92daa308-ea4d-4e9d-aa8a-59f011190b00\") " pod="openshift-infra/auto-csr-approver-29534914-q2xf7" Feb 26 08:34:00 crc kubenswrapper[4741]: I0226 08:34:00.684158 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534914-q2xf7" Feb 26 08:34:01 crc kubenswrapper[4741]: I0226 08:34:01.537889 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534914-q2xf7"] Feb 26 08:34:01 crc kubenswrapper[4741]: I0226 08:34:01.903891 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534914-q2xf7" event={"ID":"92daa308-ea4d-4e9d-aa8a-59f011190b00","Type":"ContainerStarted","Data":"0c59b5270a074eb0af1995bff8baabc44c3c6eac455ae66c3fd71a53ee6d5e46"} Feb 26 08:34:09 crc kubenswrapper[4741]: I0226 08:34:09.558574 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534914-q2xf7" event={"ID":"92daa308-ea4d-4e9d-aa8a-59f011190b00","Type":"ContainerStarted","Data":"405cf90d586a6a354f0011a8eb0de003afbe8f4e2792a05e389abae5575037aa"} Feb 26 08:34:10 crc kubenswrapper[4741]: I0226 08:34:10.567877 4741 generic.go:334] "Generic (PLEG): container finished" podID="92daa308-ea4d-4e9d-aa8a-59f011190b00" containerID="405cf90d586a6a354f0011a8eb0de003afbe8f4e2792a05e389abae5575037aa" exitCode=0 Feb 26 08:34:10 crc kubenswrapper[4741]: I0226 08:34:10.568343 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534914-q2xf7" 
event={"ID":"92daa308-ea4d-4e9d-aa8a-59f011190b00","Type":"ContainerDied","Data":"405cf90d586a6a354f0011a8eb0de003afbe8f4e2792a05e389abae5575037aa"} Feb 26 08:34:12 crc kubenswrapper[4741]: I0226 08:34:12.047950 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534914-q2xf7" Feb 26 08:34:12 crc kubenswrapper[4741]: I0226 08:34:12.189883 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czc67\" (UniqueName: \"kubernetes.io/projected/92daa308-ea4d-4e9d-aa8a-59f011190b00-kube-api-access-czc67\") pod \"92daa308-ea4d-4e9d-aa8a-59f011190b00\" (UID: \"92daa308-ea4d-4e9d-aa8a-59f011190b00\") " Feb 26 08:34:12 crc kubenswrapper[4741]: I0226 08:34:12.196548 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92daa308-ea4d-4e9d-aa8a-59f011190b00-kube-api-access-czc67" (OuterVolumeSpecName: "kube-api-access-czc67") pod "92daa308-ea4d-4e9d-aa8a-59f011190b00" (UID: "92daa308-ea4d-4e9d-aa8a-59f011190b00"). InnerVolumeSpecName "kube-api-access-czc67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:34:12 crc kubenswrapper[4741]: I0226 08:34:12.293159 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czc67\" (UniqueName: \"kubernetes.io/projected/92daa308-ea4d-4e9d-aa8a-59f011190b00-kube-api-access-czc67\") on node \"crc\" DevicePath \"\"" Feb 26 08:34:12 crc kubenswrapper[4741]: I0226 08:34:12.586948 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534914-q2xf7" event={"ID":"92daa308-ea4d-4e9d-aa8a-59f011190b00","Type":"ContainerDied","Data":"0c59b5270a074eb0af1995bff8baabc44c3c6eac455ae66c3fd71a53ee6d5e46"} Feb 26 08:34:12 crc kubenswrapper[4741]: I0226 08:34:12.587008 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c59b5270a074eb0af1995bff8baabc44c3c6eac455ae66c3fd71a53ee6d5e46" Feb 26 08:34:12 crc kubenswrapper[4741]: I0226 08:34:12.587089 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534914-q2xf7" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.118820 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534908-4pvhz"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.132508 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534908-4pvhz"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.609422 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-rt588"] Feb 26 08:34:13 crc kubenswrapper[4741]: E0226 08:34:13.609869 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92daa308-ea4d-4e9d-aa8a-59f011190b00" containerName="oc" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.609890 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="92daa308-ea4d-4e9d-aa8a-59f011190b00" containerName="oc" Feb 26 
08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.610044 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="92daa308-ea4d-4e9d-aa8a-59f011190b00" containerName="oc" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.610762 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.615362 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-f29rx" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.626382 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-rt588"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.635793 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.638154 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.646448 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-vqk6v" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.653375 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.675508 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.677609 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.682603 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-bkxz7" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.704240 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.706728 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.711424 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6v25b" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.724270 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.726431 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h24zf\" (UniqueName: \"kubernetes.io/projected/f4754cdd-d402-4c7e-a0cf-a39549369eb8-kube-api-access-h24zf\") pod \"barbican-operator-controller-manager-868647ff47-rt588\" (UID: \"f4754cdd-d402-4c7e-a0cf-a39549369eb8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.726760 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mnrh\" (UniqueName: \"kubernetes.io/projected/b2c3a19d-a170-476f-a589-e7cde492ac1d-kube-api-access-2mnrh\") pod \"cinder-operator-controller-manager-55d77d7b5c-s78b5\" (UID: 
\"b2c3a19d-a170-476f-a589-e7cde492ac1d\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.737618 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.750238 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.752431 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.761522 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.763167 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.764543 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-q779t" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.777069 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-j9nlx" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.830733 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nc9r\" (UniqueName: \"kubernetes.io/projected/6e5158cf-c5d8-46e4-b433-20c6a410bf5e-kube-api-access-4nc9r\") pod \"glance-operator-controller-manager-784b5bb6c5-wdfht\" (UID: \"6e5158cf-c5d8-46e4-b433-20c6a410bf5e\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.831078 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h24zf\" (UniqueName: \"kubernetes.io/projected/f4754cdd-d402-4c7e-a0cf-a39549369eb8-kube-api-access-h24zf\") pod \"barbican-operator-controller-manager-868647ff47-rt588\" (UID: \"f4754cdd-d402-4c7e-a0cf-a39549369eb8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.831356 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhlf5\" (UniqueName: \"kubernetes.io/projected/7d9bffe2-0600-47fe-83e6-847d6943a748-kube-api-access-rhlf5\") pod \"designate-operator-controller-manager-6d8bf5c495-6bfw4\" (UID: \"7d9bffe2-0600-47fe-83e6-847d6943a748\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 
08:34:13.831517 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mnrh\" (UniqueName: \"kubernetes.io/projected/b2c3a19d-a170-476f-a589-e7cde492ac1d-kube-api-access-2mnrh\") pod \"cinder-operator-controller-manager-55d77d7b5c-s78b5\" (UID: \"b2c3a19d-a170-476f-a589-e7cde492ac1d\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.843020 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0528dad-89da-4e89-a788-3d5294df861e" path="/var/lib/kubelet/pods/d0528dad-89da-4e89-a788-3d5294df861e/volumes" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.843859 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.843898 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.843910 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-2lglc"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.845079 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-2lglc"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.845217 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.847524 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.849327 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.853039 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9wkdp" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.853258 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.853312 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2l42r" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.865476 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h24zf\" (UniqueName: \"kubernetes.io/projected/f4754cdd-d402-4c7e-a0cf-a39549369eb8-kube-api-access-h24zf\") pod \"barbican-operator-controller-manager-868647ff47-rt588\" (UID: \"f4754cdd-d402-4c7e-a0cf-a39549369eb8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.881342 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.882762 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.882963 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mnrh\" (UniqueName: \"kubernetes.io/projected/b2c3a19d-a170-476f-a589-e7cde492ac1d-kube-api-access-2mnrh\") pod \"cinder-operator-controller-manager-55d77d7b5c-s78b5\" (UID: \"b2c3a19d-a170-476f-a589-e7cde492ac1d\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.887139 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-k7nkc" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.896545 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.905658 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.907456 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.917671 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-lwjvc" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.924084 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.926351 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.933755 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.934601 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhlf5\" (UniqueName: \"kubernetes.io/projected/7d9bffe2-0600-47fe-83e6-847d6943a748-kube-api-access-rhlf5\") pod \"designate-operator-controller-manager-6d8bf5c495-6bfw4\" (UID: \"7d9bffe2-0600-47fe-83e6-847d6943a748\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.934738 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7g2t\" (UniqueName: \"kubernetes.io/projected/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-kube-api-access-v7g2t\") pod \"infra-operator-controller-manager-79d975b745-2lglc\" (UID: \"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.934911 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsms4\" (UniqueName: \"kubernetes.io/projected/e3fc347b-349b-4811-8f1e-0281658e669a-kube-api-access-hsms4\") pod \"horizon-operator-controller-manager-5b9b8895d5-9b4f4\" (UID: \"e3fc347b-349b-4811-8f1e-0281658e669a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.935059 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert\") 
pod \"infra-operator-controller-manager-79d975b745-2lglc\" (UID: \"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.935195 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nc9r\" (UniqueName: \"kubernetes.io/projected/6e5158cf-c5d8-46e4-b433-20c6a410bf5e-kube-api-access-4nc9r\") pod \"glance-operator-controller-manager-784b5bb6c5-wdfht\" (UID: \"6e5158cf-c5d8-46e4-b433-20c6a410bf5e\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.935310 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvm8\" (UniqueName: \"kubernetes.io/projected/aafef34e-4723-41d4-a28e-634f4ba80bea-kube-api-access-dwvm8\") pod \"heat-operator-controller-manager-69f49c598c-mkmsh\" (UID: \"aafef34e-4723-41d4-a28e-634f4ba80bea\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.955401 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj"] Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.956757 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nb5rf" Feb 26 08:34:13 crc kubenswrapper[4741]: I0226 08:34:13.972738 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.049043 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nc9r\" (UniqueName: \"kubernetes.io/projected/6e5158cf-c5d8-46e4-b433-20c6a410bf5e-kube-api-access-4nc9r\") pod \"glance-operator-controller-manager-784b5bb6c5-wdfht\" (UID: \"6e5158cf-c5d8-46e4-b433-20c6a410bf5e\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.056278 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqsr5\" (UniqueName: \"kubernetes.io/projected/e97b1690-b880-4c0d-9e36-484d2abf0e8e-kube-api-access-cqsr5\") pod \"keystone-operator-controller-manager-b4d948c87-d672t\" (UID: \"e97b1690-b880-4c0d-9e36-484d2abf0e8e\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.056413 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsms4\" (UniqueName: \"kubernetes.io/projected/e3fc347b-349b-4811-8f1e-0281658e669a-kube-api-access-hsms4\") pod \"horizon-operator-controller-manager-5b9b8895d5-9b4f4\" (UID: \"e3fc347b-349b-4811-8f1e-0281658e669a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.056480 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert\") pod \"infra-operator-controller-manager-79d975b745-2lglc\" (UID: \"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.056562 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwvm8\" (UniqueName: \"kubernetes.io/projected/aafef34e-4723-41d4-a28e-634f4ba80bea-kube-api-access-dwvm8\") pod \"heat-operator-controller-manager-69f49c598c-mkmsh\" (UID: \"aafef34e-4723-41d4-a28e-634f4ba80bea\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.056593 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k78r\" (UniqueName: \"kubernetes.io/projected/0d69cf5a-6ccc-4c66-a767-fd837ea440a3-kube-api-access-9k78r\") pod \"mariadb-operator-controller-manager-6994f66f48-d7flk\" (UID: \"0d69cf5a-6ccc-4c66-a767-fd837ea440a3\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.056846 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7g2t\" (UniqueName: \"kubernetes.io/projected/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-kube-api-access-v7g2t\") pod \"infra-operator-controller-manager-79d975b745-2lglc\" (UID: \"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.056921 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kr44\" (UniqueName: \"kubernetes.io/projected/76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe-kube-api-access-2kr44\") pod \"ironic-operator-controller-manager-554564d7fc-5tj5s\" (UID: \"76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.056992 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-492bs\" 
(UniqueName: \"kubernetes.io/projected/ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6-kube-api-access-492bs\") pod \"manila-operator-controller-manager-67d996989d-b4tjj\" (UID: \"ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" Feb 26 08:34:14 crc kubenswrapper[4741]: E0226 08:34:14.057908 4741 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 08:34:14 crc kubenswrapper[4741]: E0226 08:34:14.057996 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert podName:8520f5ec-d0e0-4bc0-a10b-dfb5157c5924 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:14.557971935 +0000 UTC m=+1289.553909322 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert") pod "infra-operator-controller-manager-79d975b745-2lglc" (UID: "8520f5ec-d0e0-4bc0-a10b-dfb5157c5924") : secret "infra-operator-webhook-server-cert" not found Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.062648 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhlf5\" (UniqueName: \"kubernetes.io/projected/7d9bffe2-0600-47fe-83e6-847d6943a748-kube-api-access-rhlf5\") pod \"designate-operator-controller-manager-6d8bf5c495-6bfw4\" (UID: \"7d9bffe2-0600-47fe-83e6-847d6943a748\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.075277 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.095675 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwvm8\" (UniqueName: 
\"kubernetes.io/projected/aafef34e-4723-41d4-a28e-634f4ba80bea-kube-api-access-dwvm8\") pod \"heat-operator-controller-manager-69f49c598c-mkmsh\" (UID: \"aafef34e-4723-41d4-a28e-634f4ba80bea\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.099881 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7g2t\" (UniqueName: \"kubernetes.io/projected/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-kube-api-access-v7g2t\") pod \"infra-operator-controller-manager-79d975b745-2lglc\" (UID: \"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.144988 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsms4\" (UniqueName: \"kubernetes.io/projected/e3fc347b-349b-4811-8f1e-0281658e669a-kube-api-access-hsms4\") pod \"horizon-operator-controller-manager-5b9b8895d5-9b4f4\" (UID: \"e3fc347b-349b-4811-8f1e-0281658e669a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.186832 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.189083 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kr44\" (UniqueName: \"kubernetes.io/projected/76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe-kube-api-access-2kr44\") pod \"ironic-operator-controller-manager-554564d7fc-5tj5s\" (UID: \"76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.189168 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-492bs\" (UniqueName: \"kubernetes.io/projected/ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6-kube-api-access-492bs\") pod \"manila-operator-controller-manager-67d996989d-b4tjj\" (UID: \"ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.189199 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqsr5\" (UniqueName: \"kubernetes.io/projected/e97b1690-b880-4c0d-9e36-484d2abf0e8e-kube-api-access-cqsr5\") pod \"keystone-operator-controller-manager-b4d948c87-d672t\" (UID: \"e97b1690-b880-4c0d-9e36-484d2abf0e8e\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.189283 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k78r\" (UniqueName: \"kubernetes.io/projected/0d69cf5a-6ccc-4c66-a767-fd837ea440a3-kube-api-access-9k78r\") pod \"mariadb-operator-controller-manager-6994f66f48-d7flk\" (UID: \"0d69cf5a-6ccc-4c66-a767-fd837ea440a3\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 
08:34:14.196339 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.205500 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.229911 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8ldf8" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.241700 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.269013 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.272515 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.292845 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jpr4\" (UniqueName: \"kubernetes.io/projected/3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb-kube-api-access-9jpr4\") pod \"neutron-operator-controller-manager-6bd4687957-tc4z9\" (UID: \"3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.293901 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-6sxrk" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.298513 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.319699 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.331908 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.332439 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.348876 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.351343 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.362598 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-w8pxl" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.397307 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.398949 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jpr4\" (UniqueName: \"kubernetes.io/projected/3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb-kube-api-access-9jpr4\") pod \"neutron-operator-controller-manager-6bd4687957-tc4z9\" (UID: \"3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.399027 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s77p4\" (UniqueName: \"kubernetes.io/projected/c40047b0-d115-4a5f-aa50-d888eafff094-kube-api-access-s77p4\") pod \"nova-operator-controller-manager-567668f5cf-k2c7v\" (UID: \"c40047b0-d115-4a5f-aa50-d888eafff094\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.416961 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-492bs\" (UniqueName: \"kubernetes.io/projected/ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6-kube-api-access-492bs\") pod \"manila-operator-controller-manager-67d996989d-b4tjj\" (UID: \"ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.419000 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2kr44\" (UniqueName: \"kubernetes.io/projected/76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe-kube-api-access-2kr44\") pod \"ironic-operator-controller-manager-554564d7fc-5tj5s\" (UID: \"76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.435752 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.443757 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k78r\" (UniqueName: \"kubernetes.io/projected/0d69cf5a-6ccc-4c66-a767-fd837ea440a3-kube-api-access-9k78r\") pod \"mariadb-operator-controller-manager-6994f66f48-d7flk\" (UID: \"0d69cf5a-6ccc-4c66-a767-fd837ea440a3\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.444481 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqsr5\" (UniqueName: \"kubernetes.io/projected/e97b1690-b880-4c0d-9e36-484d2abf0e8e-kube-api-access-cqsr5\") pod \"keystone-operator-controller-manager-b4d948c87-d672t\" (UID: \"e97b1690-b880-4c0d-9e36-484d2abf0e8e\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.463138 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.470724 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jpr4\" (UniqueName: \"kubernetes.io/projected/3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb-kube-api-access-9jpr4\") pod \"neutron-operator-controller-manager-6bd4687957-tc4z9\" (UID: \"3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.485205 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.487033 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.503679 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4ctk\" (UniqueName: \"kubernetes.io/projected/6980cc82-375e-4057-8dd6-1518d19891ed-kube-api-access-s4ctk\") pod \"octavia-operator-controller-manager-659dc6bbfc-z8h9r\" (UID: \"6980cc82-375e-4057-8dd6-1518d19891ed\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.503912 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s77p4\" (UniqueName: \"kubernetes.io/projected/c40047b0-d115-4a5f-aa50-d888eafff094-kube-api-access-s77p4\") pod \"nova-operator-controller-manager-567668f5cf-k2c7v\" (UID: \"c40047b0-d115-4a5f-aa50-d888eafff094\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.510898 4741 reflector.go:368] Caches populated 
for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-mqwbn" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.594202 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.598448 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s77p4\" (UniqueName: \"kubernetes.io/projected/c40047b0-d115-4a5f-aa50-d888eafff094-kube-api-access-s77p4\") pod \"nova-operator-controller-manager-567668f5cf-k2c7v\" (UID: \"c40047b0-d115-4a5f-aa50-d888eafff094\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.607530 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5k9b\" (UniqueName: \"kubernetes.io/projected/10293970-cf7e-4d61-9522-0bbfaa7a872f-kube-api-access-q5k9b\") pod \"ovn-operator-controller-manager-5955d8c787-zhvxr\" (UID: \"10293970-cf7e-4d61-9522-0bbfaa7a872f\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.607617 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert\") pod \"infra-operator-controller-manager-79d975b745-2lglc\" (UID: \"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.607713 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4ctk\" (UniqueName: \"kubernetes.io/projected/6980cc82-375e-4057-8dd6-1518d19891ed-kube-api-access-s4ctk\") pod \"octavia-operator-controller-manager-659dc6bbfc-z8h9r\" (UID: 
\"6980cc82-375e-4057-8dd6-1518d19891ed\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" Feb 26 08:34:14 crc kubenswrapper[4741]: E0226 08:34:14.608197 4741 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 08:34:14 crc kubenswrapper[4741]: E0226 08:34:14.608249 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert podName:8520f5ec-d0e0-4bc0-a10b-dfb5157c5924 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:15.608227796 +0000 UTC m=+1290.604165173 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert") pod "infra-operator-controller-manager-79d975b745-2lglc" (UID: "8520f5ec-d0e0-4bc0-a10b-dfb5157c5924") : secret "infra-operator-webhook-server-cert" not found Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.636601 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.638080 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.644633 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9zdtb" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.645945 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4ctk\" (UniqueName: \"kubernetes.io/projected/6980cc82-375e-4057-8dd6-1518d19891ed-kube-api-access-s4ctk\") pod \"octavia-operator-controller-manager-659dc6bbfc-z8h9r\" (UID: \"6980cc82-375e-4057-8dd6-1518d19891ed\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.703202 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.705174 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.706656 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.707309 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.723787 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wkn8l" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.724000 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.727161 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.729613 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5k9b\" (UniqueName: \"kubernetes.io/projected/10293970-cf7e-4d61-9522-0bbfaa7a872f-kube-api-access-q5k9b\") pod \"ovn-operator-controller-manager-5955d8c787-zhvxr\" (UID: \"10293970-cf7e-4d61-9522-0bbfaa7a872f\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.730253 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.730719 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.732918 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.779323 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5k9b\" (UniqueName: \"kubernetes.io/projected/10293970-cf7e-4d61-9522-0bbfaa7a872f-kube-api-access-q5k9b\") pod \"ovn-operator-controller-manager-5955d8c787-zhvxr\" (UID: \"10293970-cf7e-4d61-9522-0bbfaa7a872f\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.815161 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.831322 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z\" (UID: \"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.831367 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vjbw\" (UniqueName: \"kubernetes.io/projected/c9c57ac4-4382-4a2a-b0c7-8985f71ea615-kube-api-access-5vjbw\") pod \"placement-operator-controller-manager-8497b45c89-x77f8\" (UID: \"c9c57ac4-4382-4a2a-b0c7-8985f71ea615\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.831461 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfqbp\" (UniqueName: \"kubernetes.io/projected/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-kube-api-access-lfqbp\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z\" (UID: \"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.841873 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.842067 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.885287 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz"] Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.886735 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.893196 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-tnswh" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.934479 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z\" (UID: \"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.934533 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vjbw\" (UniqueName: \"kubernetes.io/projected/c9c57ac4-4382-4a2a-b0c7-8985f71ea615-kube-api-access-5vjbw\") pod 
\"placement-operator-controller-manager-8497b45c89-x77f8\" (UID: \"c9c57ac4-4382-4a2a-b0c7-8985f71ea615\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" Feb 26 08:34:14 crc kubenswrapper[4741]: I0226 08:34:14.934641 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfqbp\" (UniqueName: \"kubernetes.io/projected/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-kube-api-access-lfqbp\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z\" (UID: \"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:34:14 crc kubenswrapper[4741]: E0226 08:34:14.938648 4741 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 08:34:14 crc kubenswrapper[4741]: E0226 08:34:14.938707 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert podName:80b43fed-c72c-4b2b-8d4d-0a0b9044d61f nodeName:}" failed. No retries permitted until 2026-02-26 08:34:15.438688992 +0000 UTC m=+1290.434626379 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" (UID: "80b43fed-c72c-4b2b-8d4d-0a0b9044d61f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.001557 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfqbp\" (UniqueName: \"kubernetes.io/projected/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-kube-api-access-lfqbp\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z\" (UID: \"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.021299 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.031770 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vjbw\" (UniqueName: \"kubernetes.io/projected/c9c57ac4-4382-4a2a-b0c7-8985f71ea615-kube-api-access-5vjbw\") pod \"placement-operator-controller-manager-8497b45c89-x77f8\" (UID: \"c9c57ac4-4382-4a2a-b0c7-8985f71ea615\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.032468 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.034349 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.034434 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.036537 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-5khjn" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.041746 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hnt5\" (UniqueName: \"kubernetes.io/projected/dbdb4143-6ca6-4468-ae59-db0a15ae9229-kube-api-access-7hnt5\") pod \"swift-operator-controller-manager-68f46476f-7c7nz\" (UID: \"dbdb4143-6ca6-4468-ae59-db0a15ae9229\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.046288 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.048095 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.054563 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-lsxjv" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.060542 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.114324 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.148613 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.159521 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx2rv\" (UniqueName: \"kubernetes.io/projected/001f4723-6a83-41ae-ac81-fc17c370a90e-kube-api-access-tx2rv\") pod \"telemetry-operator-controller-manager-5854c6b474-xr2dz\" (UID: \"001f4723-6a83-41ae-ac81-fc17c370a90e\") " pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.159743 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7pt9\" (UniqueName: \"kubernetes.io/projected/e569c05c-2b4a-448e-8393-65650cdc0d4a-kube-api-access-t7pt9\") pod \"test-operator-controller-manager-5dc6794d5b-zf778\" (UID: \"e569c05c-2b4a-448e-8393-65650cdc0d4a\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.160458 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hnt5\" (UniqueName: \"kubernetes.io/projected/dbdb4143-6ca6-4468-ae59-db0a15ae9229-kube-api-access-7hnt5\") pod \"swift-operator-controller-manager-68f46476f-7c7nz\" (UID: \"dbdb4143-6ca6-4468-ae59-db0a15ae9229\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.208239 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jrn2n" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.224703 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.253641 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hnt5\" (UniqueName: \"kubernetes.io/projected/dbdb4143-6ca6-4468-ae59-db0a15ae9229-kube-api-access-7hnt5\") pod \"swift-operator-controller-manager-68f46476f-7c7nz\" (UID: \"dbdb4143-6ca6-4468-ae59-db0a15ae9229\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.271442 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx2rv\" (UniqueName: \"kubernetes.io/projected/001f4723-6a83-41ae-ac81-fc17c370a90e-kube-api-access-tx2rv\") pod \"telemetry-operator-controller-manager-5854c6b474-xr2dz\" (UID: \"001f4723-6a83-41ae-ac81-fc17c370a90e\") " pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.271510 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbjjr\" (UniqueName: \"kubernetes.io/projected/3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed-kube-api-access-gbjjr\") pod \"watcher-operator-controller-manager-bccc79885-qmzqh\" (UID: \"3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.271550 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7pt9\" (UniqueName: \"kubernetes.io/projected/e569c05c-2b4a-448e-8393-65650cdc0d4a-kube-api-access-t7pt9\") pod \"test-operator-controller-manager-5dc6794d5b-zf778\" (UID: \"e569c05c-2b4a-448e-8393-65650cdc0d4a\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.317611 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx2rv\" (UniqueName: 
\"kubernetes.io/projected/001f4723-6a83-41ae-ac81-fc17c370a90e-kube-api-access-tx2rv\") pod \"telemetry-operator-controller-manager-5854c6b474-xr2dz\" (UID: \"001f4723-6a83-41ae-ac81-fc17c370a90e\") " pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.325762 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7pt9\" (UniqueName: \"kubernetes.io/projected/e569c05c-2b4a-448e-8393-65650cdc0d4a-kube-api-access-t7pt9\") pod \"test-operator-controller-manager-5dc6794d5b-zf778\" (UID: \"e569c05c-2b4a-448e-8393-65650cdc0d4a\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.337720 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.339246 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.354706 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.384006 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.391077 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbjjr\" (UniqueName: \"kubernetes.io/projected/3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed-kube-api-access-gbjjr\") pod \"watcher-operator-controller-manager-bccc79885-qmzqh\" (UID: \"3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.393561 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.395439 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.402787 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.403873 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.404012 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-ljcdt" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.407597 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.422147 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbjjr\" (UniqueName: \"kubernetes.io/projected/3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed-kube-api-access-gbjjr\") pod 
\"watcher-operator-controller-manager-bccc79885-qmzqh\" (UID: \"3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.431747 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.433049 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.440989 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-j7lhj" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.483343 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.493854 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.493965 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.493996 
4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z\" (UID: \"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.494018 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv4vl\" (UniqueName: \"kubernetes.io/projected/e374c69c-1959-44c3-839c-2b5897259440-kube-api-access-dv4vl\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:15 crc kubenswrapper[4741]: E0226 08:34:15.495840 4741 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 08:34:15 crc kubenswrapper[4741]: E0226 08:34:15.495927 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert podName:80b43fed-c72c-4b2b-8d4d-0a0b9044d61f nodeName:}" failed. No retries permitted until 2026-02-26 08:34:16.49589843 +0000 UTC m=+1291.491835997 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" (UID: "80b43fed-c72c-4b2b-8d4d-0a0b9044d61f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.568386 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.609237 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.609361 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.609411 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv4vl\" (UniqueName: \"kubernetes.io/projected/e374c69c-1959-44c3-839c-2b5897259440-kube-api-access-dv4vl\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.609494 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnqkw\" (UniqueName: \"kubernetes.io/projected/6c09faf7-6a12-4474-8251-2aa222e9c596-kube-api-access-mnqkw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9rlrx\" (UID: \"6c09faf7-6a12-4474-8251-2aa222e9c596\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.609572 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert\") pod \"infra-operator-controller-manager-79d975b745-2lglc\" (UID: \"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:15 crc kubenswrapper[4741]: E0226 08:34:15.609764 4741 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 08:34:15 crc kubenswrapper[4741]: E0226 08:34:15.609830 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert podName:8520f5ec-d0e0-4bc0-a10b-dfb5157c5924 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:17.609808732 +0000 UTC m=+1292.605746109 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert") pod "infra-operator-controller-manager-79d975b745-2lglc" (UID: "8520f5ec-d0e0-4bc0-a10b-dfb5157c5924") : secret "infra-operator-webhook-server-cert" not found Feb 26 08:34:15 crc kubenswrapper[4741]: E0226 08:34:15.609889 4741 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 08:34:15 crc kubenswrapper[4741]: E0226 08:34:15.609915 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs podName:e374c69c-1959-44c3-839c-2b5897259440 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:16.109907924 +0000 UTC m=+1291.105845311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs") pod "openstack-operator-controller-manager-79d8d89fdf-5jkv5" (UID: "e374c69c-1959-44c3-839c-2b5897259440") : secret "metrics-server-cert" not found Feb 26 08:34:15 crc kubenswrapper[4741]: E0226 08:34:15.609958 4741 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 08:34:15 crc kubenswrapper[4741]: E0226 08:34:15.609985 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs podName:e374c69c-1959-44c3-839c-2b5897259440 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:16.109976236 +0000 UTC m=+1291.105913623 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs") pod "openstack-operator-controller-manager-79d8d89fdf-5jkv5" (UID: "e374c69c-1959-44c3-839c-2b5897259440") : secret "webhook-server-cert" not found Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.661738 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv4vl\" (UniqueName: \"kubernetes.io/projected/e374c69c-1959-44c3-839c-2b5897259440-kube-api-access-dv4vl\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.679545 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" event={"ID":"b2c3a19d-a170-476f-a589-e7cde492ac1d","Type":"ContainerStarted","Data":"52661e9aba309eeebbc4ebdd77e432b4dc674a9e266c330fe1c7002dac743da1"} Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.709508 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.711375 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnqkw\" (UniqueName: \"kubernetes.io/projected/6c09faf7-6a12-4474-8251-2aa222e9c596-kube-api-access-mnqkw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9rlrx\" (UID: \"6c09faf7-6a12-4474-8251-2aa222e9c596\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.732306 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnqkw\" (UniqueName: \"kubernetes.io/projected/6c09faf7-6a12-4474-8251-2aa222e9c596-kube-api-access-mnqkw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9rlrx\" (UID: \"6c09faf7-6a12-4474-8251-2aa222e9c596\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.795684 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx" Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.813354 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-rt588"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.897120 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4"] Feb 26 08:34:15 crc kubenswrapper[4741]: I0226 08:34:15.908719 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh"] Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.136425 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.136790 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:16 crc kubenswrapper[4741]: E0226 08:34:16.136993 4741 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 08:34:16 crc kubenswrapper[4741]: E0226 08:34:16.137066 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs 
podName:e374c69c-1959-44c3-839c-2b5897259440 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:17.137042685 +0000 UTC m=+1292.132980072 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs") pod "openstack-operator-controller-manager-79d8d89fdf-5jkv5" (UID: "e374c69c-1959-44c3-839c-2b5897259440") : secret "webhook-server-cert" not found Feb 26 08:34:16 crc kubenswrapper[4741]: E0226 08:34:16.137151 4741 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 08:34:16 crc kubenswrapper[4741]: E0226 08:34:16.137179 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs podName:e374c69c-1959-44c3-839c-2b5897259440 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:17.137172018 +0000 UTC m=+1292.133109405 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs") pod "openstack-operator-controller-manager-79d8d89fdf-5jkv5" (UID: "e374c69c-1959-44c3-839c-2b5897259440") : secret "metrics-server-cert" not found Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.544693 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z\" (UID: \"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:34:16 crc kubenswrapper[4741]: E0226 08:34:16.544896 4741 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 08:34:16 crc kubenswrapper[4741]: E0226 08:34:16.544952 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert podName:80b43fed-c72c-4b2b-8d4d-0a0b9044d61f nodeName:}" failed. No retries permitted until 2026-02-26 08:34:18.544935644 +0000 UTC m=+1293.540873031 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" (UID: "80b43fed-c72c-4b2b-8d4d-0a0b9044d61f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.646227 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4"] Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.666269 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj"] Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.678885 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s"] Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.687597 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t"] Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.696254 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v"] Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.698339 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" event={"ID":"aafef34e-4723-41d4-a28e-634f4ba80bea","Type":"ContainerStarted","Data":"c7592ec264c8bda2944bfea22d0d8b048606a8f250197a71a289fe3701373c40"} Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.702769 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" event={"ID":"7d9bffe2-0600-47fe-83e6-847d6943a748","Type":"ContainerStarted","Data":"ccac36e5a4c5c26519e3922aa4a4f463e9b137fa54ec20318c2f39b9e679aa5c"} 
Feb 26 08:34:16 crc kubenswrapper[4741]: W0226 08:34:16.708019 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3fc347b_349b_4811_8f1e_0281658e669a.slice/crio-1cecf7ea9e27830c0500783e557f6d824130e7bd1e980d618b94f65b53eab12a WatchSource:0}: Error finding container 1cecf7ea9e27830c0500783e557f6d824130e7bd1e980d618b94f65b53eab12a: Status 404 returned error can't find the container with id 1cecf7ea9e27830c0500783e557f6d824130e7bd1e980d618b94f65b53eab12a Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.727178 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9"] Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.753600 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" event={"ID":"f4754cdd-d402-4c7e-a0cf-a39549369eb8","Type":"ContainerStarted","Data":"b3c7e90c18ada58d135cda81b96d58eac587c271e29af76461b435ef1374f474"} Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.758687 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" event={"ID":"ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6","Type":"ContainerStarted","Data":"65e2a2cb4e5e37adf12d075c70447e2129857770a4f2f1f61f821333c774f35d"} Feb 26 08:34:16 crc kubenswrapper[4741]: I0226 08:34:16.827005 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht"] Feb 26 08:34:16 crc kubenswrapper[4741]: W0226 08:34:16.838375 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e5158cf_c5d8_46e4_b433_20c6a410bf5e.slice/crio-9a1c7ada512ee8962ec66c9bd91f0ec4b3a3c749016af45715c29128b9e3b971 WatchSource:0}: Error finding container 
9a1c7ada512ee8962ec66c9bd91f0ec4b3a3c749016af45715c29128b9e3b971: Status 404 returned error can't find the container with id 9a1c7ada512ee8962ec66c9bd91f0ec4b3a3c749016af45715c29128b9e3b971 Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.164216 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.165148 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:17 crc kubenswrapper[4741]: E0226 08:34:17.164455 4741 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 08:34:17 crc kubenswrapper[4741]: E0226 08:34:17.165347 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs podName:e374c69c-1959-44c3-839c-2b5897259440 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:19.165325713 +0000 UTC m=+1294.161263100 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs") pod "openstack-operator-controller-manager-79d8d89fdf-5jkv5" (UID: "e374c69c-1959-44c3-839c-2b5897259440") : secret "webhook-server-cert" not found Feb 26 08:34:17 crc kubenswrapper[4741]: E0226 08:34:17.165275 4741 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 08:34:17 crc kubenswrapper[4741]: E0226 08:34:17.165532 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs podName:e374c69c-1959-44c3-839c-2b5897259440 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:19.165502208 +0000 UTC m=+1294.161439605 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs") pod "openstack-operator-controller-manager-79d8d89fdf-5jkv5" (UID: "e374c69c-1959-44c3-839c-2b5897259440") : secret "metrics-server-cert" not found Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.370186 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr"] Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.399182 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r"] Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.491264 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh"] Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.550604 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz"] Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.589029 
4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778"] Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.623172 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk"] Feb 26 08:34:17 crc kubenswrapper[4741]: E0226 08:34:17.647723 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7hnt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-7c7nz_openstack-operators(dbdb4143-6ca6-4468-ae59-db0a15ae9229): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.649139 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8"] Feb 26 08:34:17 crc kubenswrapper[4741]: E0226 08:34:17.649232 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" podUID="dbdb4143-6ca6-4468-ae59-db0a15ae9229" Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.688420 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx"] Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.702142 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert\") pod 
\"infra-operator-controller-manager-79d975b745-2lglc\" (UID: \"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.702814 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz"] Feb 26 08:34:17 crc kubenswrapper[4741]: E0226 08:34:17.703138 4741 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 08:34:17 crc kubenswrapper[4741]: E0226 08:34:17.703224 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert podName:8520f5ec-d0e0-4bc0-a10b-dfb5157c5924 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:21.703198236 +0000 UTC m=+1296.699135623 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert") pod "infra-operator-controller-manager-79d975b745-2lglc" (UID: "8520f5ec-d0e0-4bc0-a10b-dfb5157c5924") : secret "infra-operator-webhook-server-cert" not found Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.774097 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" event={"ID":"76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe","Type":"ContainerStarted","Data":"d4cec1a124efa33244617ac70dd0a332cfdedb2da20f5cd5e21589738d6253a6"} Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.777193 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" event={"ID":"e569c05c-2b4a-448e-8393-65650cdc0d4a","Type":"ContainerStarted","Data":"2376b6dd2feed89018993960ba37283c8732ba37b3995e1dc4cf399c7c1a3f96"} Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 
08:34:17.779234 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" event={"ID":"e97b1690-b880-4c0d-9e36-484d2abf0e8e","Type":"ContainerStarted","Data":"84b64ca303bcc5f166c254eb8e380202fa30dec9e1deaba237aea37a24ca701e"} Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.781053 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" event={"ID":"6e5158cf-c5d8-46e4-b433-20c6a410bf5e","Type":"ContainerStarted","Data":"9a1c7ada512ee8962ec66c9bd91f0ec4b3a3c749016af45715c29128b9e3b971"} Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.784602 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" event={"ID":"dbdb4143-6ca6-4468-ae59-db0a15ae9229","Type":"ContainerStarted","Data":"35dc172e4de7061af88e3799e4542ceff886bb28d4520c9d92e13ad27f852327"} Feb 26 08:34:17 crc kubenswrapper[4741]: E0226 08:34:17.787659 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" podUID="dbdb4143-6ca6-4468-ae59-db0a15ae9229" Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.806951 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" event={"ID":"001f4723-6a83-41ae-ac81-fc17c370a90e","Type":"ContainerStarted","Data":"6fb555dda57e633cdc75bbca2b53e82ee93d866ba18f7b35879d831a04901ee0"} Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.807007 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" event={"ID":"e3fc347b-349b-4811-8f1e-0281658e669a","Type":"ContainerStarted","Data":"1cecf7ea9e27830c0500783e557f6d824130e7bd1e980d618b94f65b53eab12a"} Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.807021 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" event={"ID":"c9c57ac4-4382-4a2a-b0c7-8985f71ea615","Type":"ContainerStarted","Data":"06513cce6ae691cfbb11a962a4dfea19118bf9579db5740eeb80b8f853a1f7ed"} Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.807033 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr" event={"ID":"10293970-cf7e-4d61-9522-0bbfaa7a872f","Type":"ContainerStarted","Data":"5f038d21bf1e32e833afeafb6f9b03d93de9a6a490d1b3780ce44ac091a7de5e"} Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.819892 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" event={"ID":"c40047b0-d115-4a5f-aa50-d888eafff094","Type":"ContainerStarted","Data":"145e5c894dfdbddc6494a2dbb9b735d6a9419e361e4fafb493afb3fadb80b705"} Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.823339 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" event={"ID":"0d69cf5a-6ccc-4c66-a767-fd837ea440a3","Type":"ContainerStarted","Data":"b9e58060efd7abf94a72c3efcd4ab65e82151d6f6cecc327ca982142566dbda7"} Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.825745 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" event={"ID":"3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed","Type":"ContainerStarted","Data":"296cf220ac2826102a4f62079b221589ae7f11356e8a49c1d2b7eb429d272e05"} Feb 26 08:34:17 crc 
kubenswrapper[4741]: I0226 08:34:17.826948 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx" event={"ID":"6c09faf7-6a12-4474-8251-2aa222e9c596","Type":"ContainerStarted","Data":"123fb83b9107251aedfc3aa69762eef46e34f04790263fbb87d31b95bee4316f"} Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.828334 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" event={"ID":"3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb","Type":"ContainerStarted","Data":"28d679f637dec1d60ae9d53100b4f08ab14228aeaf3e7dc4a76adbc8a7b2a673"} Feb 26 08:34:17 crc kubenswrapper[4741]: I0226 08:34:17.829696 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" event={"ID":"6980cc82-375e-4057-8dd6-1518d19891ed","Type":"ContainerStarted","Data":"5b1e203960f69bd76858d12c7be4c8c0199bed0c2fde024a6ff3947c63d816dc"} Feb 26 08:34:18 crc kubenswrapper[4741]: I0226 08:34:18.624953 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z\" (UID: \"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:34:18 crc kubenswrapper[4741]: E0226 08:34:18.625187 4741 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 08:34:18 crc kubenswrapper[4741]: E0226 08:34:18.625567 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert podName:80b43fed-c72c-4b2b-8d4d-0a0b9044d61f nodeName:}" failed. 
No retries permitted until 2026-02-26 08:34:22.625541877 +0000 UTC m=+1297.621479264 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" (UID: "80b43fed-c72c-4b2b-8d4d-0a0b9044d61f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 08:34:18 crc kubenswrapper[4741]: E0226 08:34:18.842050 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" podUID="dbdb4143-6ca6-4468-ae59-db0a15ae9229" Feb 26 08:34:19 crc kubenswrapper[4741]: I0226 08:34:19.238203 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:19 crc kubenswrapper[4741]: I0226 08:34:19.238359 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:19 crc kubenswrapper[4741]: E0226 08:34:19.238504 4741 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 08:34:19 crc 
kubenswrapper[4741]: E0226 08:34:19.238560 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs podName:e374c69c-1959-44c3-839c-2b5897259440 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:23.238541368 +0000 UTC m=+1298.234478755 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs") pod "openstack-operator-controller-manager-79d8d89fdf-5jkv5" (UID: "e374c69c-1959-44c3-839c-2b5897259440") : secret "metrics-server-cert" not found Feb 26 08:34:19 crc kubenswrapper[4741]: E0226 08:34:19.238959 4741 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 08:34:19 crc kubenswrapper[4741]: E0226 08:34:19.238991 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs podName:e374c69c-1959-44c3-839c-2b5897259440 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:23.23898349 +0000 UTC m=+1298.234920867 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs") pod "openstack-operator-controller-manager-79d8d89fdf-5jkv5" (UID: "e374c69c-1959-44c3-839c-2b5897259440") : secret "webhook-server-cert" not found Feb 26 08:34:21 crc kubenswrapper[4741]: I0226 08:34:21.801209 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert\") pod \"infra-operator-controller-manager-79d975b745-2lglc\" (UID: \"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:21 crc kubenswrapper[4741]: E0226 08:34:21.801663 4741 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 08:34:21 crc kubenswrapper[4741]: E0226 08:34:21.801747 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert podName:8520f5ec-d0e0-4bc0-a10b-dfb5157c5924 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:29.801722905 +0000 UTC m=+1304.797660292 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert") pod "infra-operator-controller-manager-79d975b745-2lglc" (UID: "8520f5ec-d0e0-4bc0-a10b-dfb5157c5924") : secret "infra-operator-webhook-server-cert" not found Feb 26 08:34:22 crc kubenswrapper[4741]: I0226 08:34:22.720401 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z\" (UID: \"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:34:22 crc kubenswrapper[4741]: E0226 08:34:22.720635 4741 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 08:34:22 crc kubenswrapper[4741]: E0226 08:34:22.721239 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert podName:80b43fed-c72c-4b2b-8d4d-0a0b9044d61f nodeName:}" failed. No retries permitted until 2026-02-26 08:34:30.721210877 +0000 UTC m=+1305.717148264 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" (UID: "80b43fed-c72c-4b2b-8d4d-0a0b9044d61f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 08:34:23 crc kubenswrapper[4741]: I0226 08:34:23.333856 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:23 crc kubenswrapper[4741]: I0226 08:34:23.333961 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:23 crc kubenswrapper[4741]: E0226 08:34:23.334060 4741 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 08:34:23 crc kubenswrapper[4741]: E0226 08:34:23.334186 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs podName:e374c69c-1959-44c3-839c-2b5897259440 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:31.334161042 +0000 UTC m=+1306.330098429 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs") pod "openstack-operator-controller-manager-79d8d89fdf-5jkv5" (UID: "e374c69c-1959-44c3-839c-2b5897259440") : secret "metrics-server-cert" not found Feb 26 08:34:23 crc kubenswrapper[4741]: E0226 08:34:23.334652 4741 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 08:34:23 crc kubenswrapper[4741]: E0226 08:34:23.334902 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs podName:e374c69c-1959-44c3-839c-2b5897259440 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:31.334855802 +0000 UTC m=+1306.330793189 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs") pod "openstack-operator-controller-manager-79d8d89fdf-5jkv5" (UID: "e374c69c-1959-44c3-839c-2b5897259440") : secret "webhook-server-cert" not found Feb 26 08:34:29 crc kubenswrapper[4741]: I0226 08:34:29.882781 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert\") pod \"infra-operator-controller-manager-79d975b745-2lglc\" (UID: \"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:29 crc kubenswrapper[4741]: E0226 08:34:29.883013 4741 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 08:34:29 crc kubenswrapper[4741]: E0226 08:34:29.883826 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert 
podName:8520f5ec-d0e0-4bc0-a10b-dfb5157c5924 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:45.88378508 +0000 UTC m=+1320.879722507 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert") pod "infra-operator-controller-manager-79d975b745-2lglc" (UID: "8520f5ec-d0e0-4bc0-a10b-dfb5157c5924") : secret "infra-operator-webhook-server-cert" not found Feb 26 08:34:30 crc kubenswrapper[4741]: I0226 08:34:30.809253 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z\" (UID: \"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:34:30 crc kubenswrapper[4741]: I0226 08:34:30.832543 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80b43fed-c72c-4b2b-8d4d-0a0b9044d61f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z\" (UID: \"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:34:30 crc kubenswrapper[4741]: I0226 08:34:30.989833 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:34:31 crc kubenswrapper[4741]: I0226 08:34:31.420812 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:31 crc kubenswrapper[4741]: I0226 08:34:31.421374 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:31 crc kubenswrapper[4741]: E0226 08:34:31.421589 4741 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 08:34:31 crc kubenswrapper[4741]: E0226 08:34:31.421710 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs podName:e374c69c-1959-44c3-839c-2b5897259440 nodeName:}" failed. No retries permitted until 2026-02-26 08:34:47.421678534 +0000 UTC m=+1322.417615921 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs") pod "openstack-operator-controller-manager-79d8d89fdf-5jkv5" (UID: "e374c69c-1959-44c3-839c-2b5897259440") : secret "webhook-server-cert" not found Feb 26 08:34:31 crc kubenswrapper[4741]: I0226 08:34:31.434569 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-metrics-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:32 crc kubenswrapper[4741]: E0226 08:34:32.033152 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:3e559f0a4f348bbaaf80e2da32282be28342e46e" Feb 26 08:34:32 crc kubenswrapper[4741]: E0226 08:34:32.033227 4741 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:3e559f0a4f348bbaaf80e2da32282be28342e46e" Feb 26 08:34:32 crc kubenswrapper[4741]: E0226 08:34:32.033405 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:3e559f0a4f348bbaaf80e2da32282be28342e46e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tx2rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5854c6b474-xr2dz_openstack-operators(001f4723-6a83-41ae-ac81-fc17c370a90e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:34:32 crc kubenswrapper[4741]: E0226 08:34:32.035150 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" podUID="001f4723-6a83-41ae-ac81-fc17c370a90e" Feb 26 08:34:33 crc kubenswrapper[4741]: E0226 08:34:33.011020 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:3e559f0a4f348bbaaf80e2da32282be28342e46e\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" podUID="001f4723-6a83-41ae-ac81-fc17c370a90e" Feb 26 08:34:36 crc kubenswrapper[4741]: E0226 08:34:36.893896 4741 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = reading blob sha256:72d7a1c3d0e537bcc6eb87ea7a30a15d4325cd086bb0132461f57eee188d2f75: Get \"https://quay.io/v2/openstack-k8s-operators/octavia-operator/blobs/sha256:72d7a1c3d0e537bcc6eb87ea7a30a15d4325cd086bb0132461f57eee188d2f75\": context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06" Feb 26 08:34:36 crc kubenswrapper[4741]: E0226 08:34:36.894185 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s4ctk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-659dc6bbfc-z8h9r_openstack-operators(6980cc82-375e-4057-8dd6-1518d19891ed): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:72d7a1c3d0e537bcc6eb87ea7a30a15d4325cd086bb0132461f57eee188d2f75: Get \"https://quay.io/v2/openstack-k8s-operators/octavia-operator/blobs/sha256:72d7a1c3d0e537bcc6eb87ea7a30a15d4325cd086bb0132461f57eee188d2f75\": context canceled" logger="UnhandledError" Feb 26 08:34:36 crc kubenswrapper[4741]: E0226 08:34:36.895411 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:72d7a1c3d0e537bcc6eb87ea7a30a15d4325cd086bb0132461f57eee188d2f75: Get \\\"https://quay.io/v2/openstack-k8s-operators/octavia-operator/blobs/sha256:72d7a1c3d0e537bcc6eb87ea7a30a15d4325cd086bb0132461f57eee188d2f75\\\": context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" podUID="6980cc82-375e-4057-8dd6-1518d19891ed" Feb 26 08:34:37 crc kubenswrapper[4741]: E0226 08:34:37.051872 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" podUID="6980cc82-375e-4057-8dd6-1518d19891ed" Feb 26 08:34:37 crc kubenswrapper[4741]: E0226 08:34:37.660018 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3" Feb 26 08:34:37 crc kubenswrapper[4741]: E0226 08:34:37.660690 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2mnrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-55d77d7b5c-s78b5_openstack-operators(b2c3a19d-a170-476f-a589-e7cde492ac1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:34:37 crc kubenswrapper[4741]: E0226 08:34:37.661973 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" podUID="b2c3a19d-a170-476f-a589-e7cde492ac1d" Feb 26 08:34:38 crc kubenswrapper[4741]: E0226 08:34:38.063526 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" podUID="b2c3a19d-a170-476f-a589-e7cde492ac1d" Feb 26 08:34:45 crc kubenswrapper[4741]: I0226 08:34:45.995409 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert\") pod \"infra-operator-controller-manager-79d975b745-2lglc\" (UID: \"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:46 crc kubenswrapper[4741]: I0226 08:34:46.028419 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8520f5ec-d0e0-4bc0-a10b-dfb5157c5924-cert\") pod \"infra-operator-controller-manager-79d975b745-2lglc\" (UID: \"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:46 crc kubenswrapper[4741]: I0226 08:34:46.140190 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9wkdp" Feb 26 08:34:46 crc kubenswrapper[4741]: I0226 08:34:46.147923 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:34:46 crc kubenswrapper[4741]: E0226 08:34:46.426441 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 26 08:34:46 crc kubenswrapper[4741]: E0226 08:34:46.426736 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5vjbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-x77f8_openstack-operators(c9c57ac4-4382-4a2a-b0c7-8985f71ea615): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:34:46 crc kubenswrapper[4741]: E0226 08:34:46.428358 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" podUID="c9c57ac4-4382-4a2a-b0c7-8985f71ea615" Feb 26 08:34:46 crc kubenswrapper[4741]: E0226 08:34:46.995605 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" podUID="c9c57ac4-4382-4a2a-b0c7-8985f71ea615" Feb 26 08:34:47 crc kubenswrapper[4741]: E0226 08:34:47.458548 4741 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:8f06b9963e5b324856ce8ed80872cf04fdfb299d4f5cf13cb1d26f4e69ed42be" Feb 26 08:34:47 crc kubenswrapper[4741]: E0226 08:34:47.458822 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:8f06b9963e5b324856ce8ed80872cf04fdfb299d4f5cf13cb1d26f4e69ed42be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nc9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-784b5bb6c5-wdfht_openstack-operators(6e5158cf-c5d8-46e4-b433-20c6a410bf5e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:34:47 crc kubenswrapper[4741]: I0226 08:34:47.460664 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:47 crc kubenswrapper[4741]: E0226 08:34:47.460762 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" podUID="6e5158cf-c5d8-46e4-b433-20c6a410bf5e" Feb 26 08:34:47 crc kubenswrapper[4741]: I0226 08:34:47.471243 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e374c69c-1959-44c3-839c-2b5897259440-webhook-certs\") pod \"openstack-operator-controller-manager-79d8d89fdf-5jkv5\" (UID: \"e374c69c-1959-44c3-839c-2b5897259440\") " pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:47 crc kubenswrapper[4741]: I0226 08:34:47.565624 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-ljcdt" Feb 26 08:34:47 crc kubenswrapper[4741]: I0226 08:34:47.572929 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:34:48 crc kubenswrapper[4741]: E0226 08:34:48.004433 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:8f06b9963e5b324856ce8ed80872cf04fdfb299d4f5cf13cb1d26f4e69ed42be\\\"\"" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" podUID="6e5158cf-c5d8-46e4-b433-20c6a410bf5e" Feb 26 08:34:49 crc kubenswrapper[4741]: E0226 08:34:49.444631 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 26 08:34:49 crc kubenswrapper[4741]: E0226 08:34:49.445618 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hsms4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-9b4f4_openstack-operators(e3fc347b-349b-4811-8f1e-0281658e669a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:34:49 crc kubenswrapper[4741]: E0226 08:34:49.446796 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" podUID="e3fc347b-349b-4811-8f1e-0281658e669a" Feb 26 08:34:50 crc kubenswrapper[4741]: E0226 08:34:50.024405 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" podUID="e3fc347b-349b-4811-8f1e-0281658e669a" Feb 26 08:34:50 crc kubenswrapper[4741]: E0226 08:34:50.120158 4741 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 26 08:34:50 crc kubenswrapper[4741]: E0226 08:34:50.120385 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhlf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-6bfw4_openstack-operators(7d9bffe2-0600-47fe-83e6-847d6943a748): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:34:50 crc kubenswrapper[4741]: E0226 08:34:50.122012 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" podUID="7d9bffe2-0600-47fe-83e6-847d6943a748" Feb 26 08:34:51 crc kubenswrapper[4741]: E0226 08:34:51.034051 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" podUID="7d9bffe2-0600-47fe-83e6-847d6943a748" Feb 26 08:34:53 crc kubenswrapper[4741]: E0226 08:34:53.095316 4741 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 26 08:34:53 crc kubenswrapper[4741]: E0226 08:34:53.095584 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwvm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-mkmsh_openstack-operators(aafef34e-4723-41d4-a28e-634f4ba80bea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:34:53 crc kubenswrapper[4741]: E0226 08:34:53.097142 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" podUID="aafef34e-4723-41d4-a28e-634f4ba80bea" Feb 26 08:34:53 crc kubenswrapper[4741]: E0226 08:34:53.763817 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26" Feb 26 08:34:53 crc kubenswrapper[4741]: E0226 08:34:53.764094 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-492bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-67d996989d-b4tjj_openstack-operators(ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:34:53 crc kubenswrapper[4741]: E0226 08:34:53.765365 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" podUID="ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6" Feb 26 08:34:54 crc kubenswrapper[4741]: E0226 08:34:54.081733 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" podUID="ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6" Feb 26 08:34:54 crc kubenswrapper[4741]: E0226 08:34:54.082771 4741 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" podUID="aafef34e-4723-41d4-a28e-634f4ba80bea" Feb 26 08:34:54 crc kubenswrapper[4741]: E0226 08:34:54.649367 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 26 08:34:54 crc kubenswrapper[4741]: E0226 08:34:54.649614 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9k78r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-d7flk_openstack-operators(0d69cf5a-6ccc-4c66-a767-fd837ea440a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:34:54 crc kubenswrapper[4741]: E0226 08:34:54.651497 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" podUID="0d69cf5a-6ccc-4c66-a767-fd837ea440a3" Feb 26 08:34:55 crc kubenswrapper[4741]: E0226 08:34:55.236215 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" podUID="0d69cf5a-6ccc-4c66-a767-fd837ea440a3" Feb 26 08:34:55 crc kubenswrapper[4741]: E0226 08:34:55.701183 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 26 08:34:55 crc kubenswrapper[4741]: E0226 08:34:55.701966 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h24zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-rt588_openstack-operators(f4754cdd-d402-4c7e-a0cf-a39549369eb8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:34:55 crc kubenswrapper[4741]: E0226 08:34:55.703238 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" podUID="f4754cdd-d402-4c7e-a0cf-a39549369eb8" Feb 26 08:34:56 crc kubenswrapper[4741]: E0226 08:34:56.100296 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" podUID="f4754cdd-d402-4c7e-a0cf-a39549369eb8" Feb 26 08:34:57 crc kubenswrapper[4741]: E0226 08:34:57.794628 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 26 08:34:57 crc kubenswrapper[4741]: E0226 08:34:57.795281 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cqsr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-d672t_openstack-operators(e97b1690-b880-4c0d-9e36-484d2abf0e8e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:34:57 crc kubenswrapper[4741]: E0226 08:34:57.796446 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" podUID="e97b1690-b880-4c0d-9e36-484d2abf0e8e" Feb 26 08:34:58 crc kubenswrapper[4741]: E0226 08:34:58.127158 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" podUID="e97b1690-b880-4c0d-9e36-484d2abf0e8e" Feb 26 08:34:59 crc kubenswrapper[4741]: E0226 08:34:59.028138 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 26 08:34:59 crc kubenswrapper[4741]: E0226 08:34:59.028431 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s77p4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-k2c7v_openstack-operators(c40047b0-d115-4a5f-aa50-d888eafff094): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:34:59 crc kubenswrapper[4741]: E0226 08:34:59.029619 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" podUID="c40047b0-d115-4a5f-aa50-d888eafff094" Feb 26 08:34:59 crc kubenswrapper[4741]: E0226 08:34:59.131299 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" podUID="c40047b0-d115-4a5f-aa50-d888eafff094" Feb 26 08:35:08 crc kubenswrapper[4741]: E0226 08:35:08.366058 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3" Feb 26 08:35:08 crc kubenswrapper[4741]: E0226 08:35:08.366809 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2mnrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-55d77d7b5c-s78b5_openstack-operators(b2c3a19d-a170-476f-a589-e7cde492ac1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:35:08 crc kubenswrapper[4741]: E0226 08:35:08.368000 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" podUID="b2c3a19d-a170-476f-a589-e7cde492ac1d" Feb 26 08:35:08 crc kubenswrapper[4741]: E0226 08:35:08.434175 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 26 08:35:08 crc kubenswrapper[4741]: E0226 08:35:08.434917 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2kr44,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-5tj5s_openstack-operators(76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:35:08 crc kubenswrapper[4741]: E0226 08:35:08.436325 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" podUID="76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe" Feb 26 08:35:09 crc kubenswrapper[4741]: E0226 08:35:09.113217 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf" Feb 26 08:35:09 crc kubenswrapper[4741]: E0226 08:35:09.113429 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jpr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6bd4687957-tc4z9_openstack-operators(3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:35:09 crc kubenswrapper[4741]: E0226 08:35:09.114920 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" podUID="3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb" Feb 26 08:35:09 crc kubenswrapper[4741]: E0226 08:35:09.238859 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" podUID="3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb" Feb 26 08:35:09 crc kubenswrapper[4741]: E0226 08:35:09.238860 4741 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" podUID="76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe" Feb 26 08:35:10 crc kubenswrapper[4741]: E0226 08:35:10.018661 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 26 08:35:10 crc kubenswrapper[4741]: E0226 08:35:10.018977 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7hnt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-7c7nz_openstack-operators(dbdb4143-6ca6-4468-ae59-db0a15ae9229): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:35:10 crc kubenswrapper[4741]: E0226 08:35:10.022276 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" podUID="dbdb4143-6ca6-4468-ae59-db0a15ae9229" Feb 26 08:35:11 crc kubenswrapper[4741]: I0226 08:35:11.968332 4741 scope.go:117] "RemoveContainer" containerID="ea38f5b32a76aca6f0722006d3fa3be65f7e356e6c7df19b53d8a42990ca47c8" Feb 26 08:35:12 crc kubenswrapper[4741]: E0226 08:35:12.157067 4741 log.go:32] "PullImage from 
image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06" Feb 26 08:35:12 crc kubenswrapper[4741]: E0226 08:35:12.158359 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s4ctk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-659dc6bbfc-z8h9r_openstack-operators(6980cc82-375e-4057-8dd6-1518d19891ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:35:12 crc kubenswrapper[4741]: E0226 08:35:12.159646 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" podUID="6980cc82-375e-4057-8dd6-1518d19891ed" Feb 26 08:35:13 crc kubenswrapper[4741]: E0226 08:35:13.163725 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 26 08:35:13 crc kubenswrapper[4741]: E0226 08:35:13.164955 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnqkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9rlrx_openstack-operators(6c09faf7-6a12-4474-8251-2aa222e9c596): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:35:13 crc kubenswrapper[4741]: E0226 08:35:13.166135 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx" podUID="6c09faf7-6a12-4474-8251-2aa222e9c596" Feb 26 08:35:13 crc kubenswrapper[4741]: E0226 08:35:13.291751 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx" podUID="6c09faf7-6a12-4474-8251-2aa222e9c596" Feb 26 08:35:13 crc kubenswrapper[4741]: I0226 08:35:13.725369 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5"] Feb 26 08:35:13 crc kubenswrapper[4741]: I0226 08:35:13.834184 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-2lglc"] Feb 26 08:35:13 crc kubenswrapper[4741]: I0226 08:35:13.867593 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z"] Feb 26 08:35:14 crc kubenswrapper[4741]: I0226 08:35:14.292326 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" event={"ID":"3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed","Type":"ContainerStarted","Data":"15eb2fb815bf5e120908cc628eaef45b9011ef5da10144034d4b2aa922072da5"} Feb 26 08:35:14 crc kubenswrapper[4741]: I0226 08:35:14.294187 4741 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" event={"ID":"e569c05c-2b4a-448e-8393-65650cdc0d4a","Type":"ContainerStarted","Data":"90ab64f55ed6f4600a1386ffab9a2cf28cda36fd6ff64c125017056eb021afd6"} Feb 26 08:35:14 crc kubenswrapper[4741]: I0226 08:35:14.294340 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" Feb 26 08:35:14 crc kubenswrapper[4741]: I0226 08:35:14.296978 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" event={"ID":"001f4723-6a83-41ae-ac81-fc17c370a90e","Type":"ContainerStarted","Data":"b84683d6da4337a3b490e69b77dded97af19457f82aa56ce1c78e7a47a367372"} Feb 26 08:35:14 crc kubenswrapper[4741]: I0226 08:35:14.298988 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" event={"ID":"c9c57ac4-4382-4a2a-b0c7-8985f71ea615","Type":"ContainerStarted","Data":"d59c6873204f4c23c2104cc4d6c3485aeda6a7d667eee25a4349ce674aa7e414"} Feb 26 08:35:14 crc kubenswrapper[4741]: I0226 08:35:14.302526 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr" event={"ID":"10293970-cf7e-4d61-9522-0bbfaa7a872f","Type":"ContainerStarted","Data":"5b8d1744dbbe18420b71228dc7b02160bb6cd164224b3515f432459b16c250e9"} Feb 26 08:35:14 crc kubenswrapper[4741]: I0226 08:35:14.303362 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr" Feb 26 08:35:14 crc kubenswrapper[4741]: I0226 08:35:14.305631 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" 
event={"ID":"7d9bffe2-0600-47fe-83e6-847d6943a748","Type":"ContainerStarted","Data":"5d9bace7460d318b121114da37a0355e56ac83295ef09064dcdfc5db0f2a3f6e"} Feb 26 08:35:14 crc kubenswrapper[4741]: I0226 08:35:14.391034 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" podStartSLOduration=7.027989496 podStartE2EDuration="1m0.391009732s" podCreationTimestamp="2026-02-26 08:34:14 +0000 UTC" firstStartedPulling="2026-02-26 08:34:17.54937304 +0000 UTC m=+1292.545310417" lastFinishedPulling="2026-02-26 08:35:10.912393256 +0000 UTC m=+1345.908330653" observedRunningTime="2026-02-26 08:35:14.379754114 +0000 UTC m=+1349.375691501" watchObservedRunningTime="2026-02-26 08:35:14.391009732 +0000 UTC m=+1349.386947119" Feb 26 08:35:14 crc kubenswrapper[4741]: I0226 08:35:14.402619 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr" podStartSLOduration=6.887106835 podStartE2EDuration="1m0.402586779s" podCreationTimestamp="2026-02-26 08:34:14 +0000 UTC" firstStartedPulling="2026-02-26 08:34:17.396814729 +0000 UTC m=+1292.392752116" lastFinishedPulling="2026-02-26 08:35:10.912294673 +0000 UTC m=+1345.908232060" observedRunningTime="2026-02-26 08:35:14.398315739 +0000 UTC m=+1349.394253126" watchObservedRunningTime="2026-02-26 08:35:14.402586779 +0000 UTC m=+1349.398524156" Feb 26 08:35:15 crc kubenswrapper[4741]: I0226 08:35:15.315906 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" Feb 26 08:35:15 crc kubenswrapper[4741]: I0226 08:35:15.316455 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" Feb 26 08:35:15 crc kubenswrapper[4741]: I0226 08:35:15.316556 4741 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" Feb 26 08:35:15 crc kubenswrapper[4741]: I0226 08:35:15.347430 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" podStartSLOduration=5.542552497 podStartE2EDuration="1m1.347411999s" podCreationTimestamp="2026-02-26 08:34:14 +0000 UTC" firstStartedPulling="2026-02-26 08:34:17.488459182 +0000 UTC m=+1292.484396569" lastFinishedPulling="2026-02-26 08:35:13.293318684 +0000 UTC m=+1348.289256071" observedRunningTime="2026-02-26 08:35:15.339522066 +0000 UTC m=+1350.335459463" watchObservedRunningTime="2026-02-26 08:35:15.347411999 +0000 UTC m=+1350.343349386" Feb 26 08:35:15 crc kubenswrapper[4741]: I0226 08:35:15.356630 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" podStartSLOduration=5.643982017 podStartE2EDuration="1m1.356613389s" podCreationTimestamp="2026-02-26 08:34:14 +0000 UTC" firstStartedPulling="2026-02-26 08:34:17.590968252 +0000 UTC m=+1292.586905639" lastFinishedPulling="2026-02-26 08:35:13.303599614 +0000 UTC m=+1348.299537011" observedRunningTime="2026-02-26 08:35:15.356434373 +0000 UTC m=+1350.352371770" watchObservedRunningTime="2026-02-26 08:35:15.356613389 +0000 UTC m=+1350.352550776" Feb 26 08:35:15 crc kubenswrapper[4741]: I0226 08:35:15.379558 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" podStartSLOduration=8.934072916 podStartE2EDuration="1m1.379533766s" podCreationTimestamp="2026-02-26 08:34:14 +0000 UTC" firstStartedPulling="2026-02-26 08:34:17.445935624 +0000 UTC m=+1292.441873001" lastFinishedPulling="2026-02-26 08:35:09.891396464 +0000 UTC m=+1344.887333851" observedRunningTime="2026-02-26 08:35:15.373700591 +0000 UTC 
m=+1350.369637978" watchObservedRunningTime="2026-02-26 08:35:15.379533766 +0000 UTC m=+1350.375471153" Feb 26 08:35:15 crc kubenswrapper[4741]: I0226 08:35:15.394318 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" podStartSLOduration=5.070498521 podStartE2EDuration="1m2.394293493s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:34:15.978904697 +0000 UTC m=+1290.974842084" lastFinishedPulling="2026-02-26 08:35:13.302699669 +0000 UTC m=+1348.298637056" observedRunningTime="2026-02-26 08:35:15.389652462 +0000 UTC m=+1350.385589849" watchObservedRunningTime="2026-02-26 08:35:15.394293493 +0000 UTC m=+1350.390230880" Feb 26 08:35:16 crc kubenswrapper[4741]: W0226 08:35:16.855653 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode374c69c_1959_44c3_839c_2b5897259440.slice/crio-fd7768e72cb4e58b1fc61694f2efa06c089617f2a5df06ca382a1312b918dc7a WatchSource:0}: Error finding container fd7768e72cb4e58b1fc61694f2efa06c089617f2a5df06ca382a1312b918dc7a: Status 404 returned error can't find the container with id fd7768e72cb4e58b1fc61694f2efa06c089617f2a5df06ca382a1312b918dc7a Feb 26 08:35:16 crc kubenswrapper[4741]: W0226 08:35:16.857529 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80b43fed_c72c_4b2b_8d4d_0a0b9044d61f.slice/crio-230a4a45917b046462670808fb7f7ef0e10187e8157be69a1e213d95f55eaa2f WatchSource:0}: Error finding container 230a4a45917b046462670808fb7f7ef0e10187e8157be69a1e213d95f55eaa2f: Status 404 returned error can't find the container with id 230a4a45917b046462670808fb7f7ef0e10187e8157be69a1e213d95f55eaa2f Feb 26 08:35:17 crc kubenswrapper[4741]: I0226 08:35:17.338189 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" event={"ID":"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f","Type":"ContainerStarted","Data":"230a4a45917b046462670808fb7f7ef0e10187e8157be69a1e213d95f55eaa2f"} Feb 26 08:35:17 crc kubenswrapper[4741]: I0226 08:35:17.340452 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" event={"ID":"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924","Type":"ContainerStarted","Data":"1c83d3c47a738e4d0935d897188a6adee0a99af0989291b2a8edb206a912f8c8"} Feb 26 08:35:17 crc kubenswrapper[4741]: I0226 08:35:17.342468 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" event={"ID":"e374c69c-1959-44c3-839c-2b5897259440","Type":"ContainerStarted","Data":"fd7768e72cb4e58b1fc61694f2efa06c089617f2a5df06ca382a1312b918dc7a"} Feb 26 08:35:19 crc kubenswrapper[4741]: I0226 08:35:19.365972 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" event={"ID":"6e5158cf-c5d8-46e4-b433-20c6a410bf5e","Type":"ContainerStarted","Data":"e07ecac1e44ff7f09ca9e23517c0c3ca013e2b71ab0f85111f343aa173b86fea"} Feb 26 08:35:19 crc kubenswrapper[4741]: I0226 08:35:19.367739 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" event={"ID":"e3fc347b-349b-4811-8f1e-0281658e669a","Type":"ContainerStarted","Data":"844165297f81ed501d86ec4110dfecd9260524aeb9e8c600a06a40a827643e71"} Feb 26 08:35:19 crc kubenswrapper[4741]: I0226 08:35:19.369367 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" event={"ID":"f4754cdd-d402-4c7e-a0cf-a39549369eb8","Type":"ContainerStarted","Data":"760191e9b29319897382b7f7e151c8c9b4d91207491d98eb8afd79bc68d9d760"} Feb 26 
08:35:20 crc kubenswrapper[4741]: I0226 08:35:20.379725 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" Feb 26 08:35:20 crc kubenswrapper[4741]: I0226 08:35:20.404747 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" podStartSLOduration=10.824010787 podStartE2EDuration="1m7.404719691s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:34:16.736250657 +0000 UTC m=+1291.732188044" lastFinishedPulling="2026-02-26 08:35:13.316959551 +0000 UTC m=+1348.312896948" observedRunningTime="2026-02-26 08:35:20.397514268 +0000 UTC m=+1355.393451685" watchObservedRunningTime="2026-02-26 08:35:20.404719691 +0000 UTC m=+1355.400657078" Feb 26 08:35:20 crc kubenswrapper[4741]: I0226 08:35:20.424078 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" podStartSLOduration=10.042730033 podStartE2EDuration="1m7.424050087s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:34:15.922886537 +0000 UTC m=+1290.918823934" lastFinishedPulling="2026-02-26 08:35:13.304206601 +0000 UTC m=+1348.300143988" observedRunningTime="2026-02-26 08:35:20.419219771 +0000 UTC m=+1355.415157168" watchObservedRunningTime="2026-02-26 08:35:20.424050087 +0000 UTC m=+1355.419987474" Feb 26 08:35:20 crc kubenswrapper[4741]: I0226 08:35:20.451727 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" podStartSLOduration=10.979602347 podStartE2EDuration="1m7.451701479s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:34:16.843078499 +0000 UTC m=+1291.839015886" lastFinishedPulling="2026-02-26 08:35:13.315177621 
+0000 UTC m=+1348.311115018" observedRunningTime="2026-02-26 08:35:20.440967855 +0000 UTC m=+1355.436905272" watchObservedRunningTime="2026-02-26 08:35:20.451701479 +0000 UTC m=+1355.447638886" Feb 26 08:35:22 crc kubenswrapper[4741]: E0226 08:35:22.767619 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" podUID="dbdb4143-6ca6-4468-ae59-db0a15ae9229" Feb 26 08:35:23 crc kubenswrapper[4741]: I0226 08:35:23.934759 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" Feb 26 08:35:24 crc kubenswrapper[4741]: E0226 08:35:24.003537 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" podUID="b2c3a19d-a170-476f-a589-e7cde492ac1d" Feb 26 08:35:24 crc kubenswrapper[4741]: I0226 08:35:24.301646 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" Feb 26 08:35:24 crc kubenswrapper[4741]: I0226 08:35:24.305428 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" Feb 26 08:35:24 crc kubenswrapper[4741]: I0226 08:35:24.337402 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" Feb 26 
08:35:24 crc kubenswrapper[4741]: I0226 08:35:24.346835 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" Feb 26 08:35:24 crc kubenswrapper[4741]: I0226 08:35:24.444404 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" Feb 26 08:35:24 crc kubenswrapper[4741]: I0226 08:35:24.848000 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr" Feb 26 08:35:25 crc kubenswrapper[4741]: I0226 08:35:25.037792 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" Feb 26 08:35:25 crc kubenswrapper[4741]: I0226 08:35:25.149929 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:35:25 crc kubenswrapper[4741]: I0226 08:35:25.150016 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:35:25 crc kubenswrapper[4741]: I0226 08:35:25.359768 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" Feb 26 08:35:25 crc kubenswrapper[4741]: I0226 08:35:25.391413 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" Feb 26 08:35:25 crc kubenswrapper[4741]: I0226 08:35:25.714505 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" Feb 26 08:35:25 crc kubenswrapper[4741]: E0226 08:35:25.796360 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" podUID="6980cc82-375e-4057-8dd6-1518d19891ed" Feb 26 08:35:32 crc kubenswrapper[4741]: I0226 08:35:32.518844 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" event={"ID":"e374c69c-1959-44c3-839c-2b5897259440","Type":"ContainerStarted","Data":"32022dcf48c6dd14f022e51f02cf953a68ea895e15041cf4cd5801118781f8f5"} Feb 26 08:35:32 crc kubenswrapper[4741]: I0226 08:35:32.520072 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:35:32 crc kubenswrapper[4741]: I0226 08:35:32.565283 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" podStartSLOduration=78.565243656 podStartE2EDuration="1m18.565243656s" podCreationTimestamp="2026-02-26 08:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:35:32.552290449 +0000 UTC m=+1367.548227866" watchObservedRunningTime="2026-02-26 08:35:32.565243656 +0000 UTC m=+1367.561181053" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 
08:35:33.528862 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" event={"ID":"ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6","Type":"ContainerStarted","Data":"b5c1423cdea3ca254fcc65ad5d1e80a7c00339b1f460543f23026c1c1af2f53e"} Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.530299 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.531918 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" event={"ID":"e97b1690-b880-4c0d-9e36-484d2abf0e8e","Type":"ContainerStarted","Data":"d6c7ce30a0168b048117b3a9c5750be8df9de51897bc2d83c45e5c4a7894f9d6"} Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.533009 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.539839 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" event={"ID":"aafef34e-4723-41d4-a28e-634f4ba80bea","Type":"ContainerStarted","Data":"4f5989b9386f4b2963bd3affc41d123322dc1e53b1024340b431b9ffc386b103"} Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.540681 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.545273 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" event={"ID":"76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe","Type":"ContainerStarted","Data":"1fff9b9a5530bf760707fdd0067cfdfeb6b72d9e4004a8208f1a299c1a2eb105"} Feb 26 08:35:33 crc 
kubenswrapper[4741]: I0226 08:35:33.546286 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.548758 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx" event={"ID":"6c09faf7-6a12-4474-8251-2aa222e9c596","Type":"ContainerStarted","Data":"97bb0d2d64b32fa754aad0396198d300b5bc0a355d96fc8b760e837491b42296"} Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.551052 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" event={"ID":"3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb","Type":"ContainerStarted","Data":"94b1bd6e6dc1eabe2e87eb4093f3618d1f61e40107ffed91ea68f2c3bfb30faf"} Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.551584 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.552958 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" event={"ID":"8520f5ec-d0e0-4bc0-a10b-dfb5157c5924","Type":"ContainerStarted","Data":"91303de35e6983ba59f4bd335b6e93d82d50fb997daa645ed2e1411503c2be2f"} Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.553507 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.554964 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" event={"ID":"c40047b0-d115-4a5f-aa50-d888eafff094","Type":"ContainerStarted","Data":"5876dd45b0869ed946c2319b52d50d389073bbf3f06a89aa5b191023aadc19de"} 
Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.555537 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.561584 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" event={"ID":"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f","Type":"ContainerStarted","Data":"0b9b18c2d08770989e810e4ef07d1cf318f0aac4a9b954e4ecf5ae9506da6f29"} Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.562182 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.564465 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" event={"ID":"0d69cf5a-6ccc-4c66-a767-fd837ea440a3","Type":"ContainerStarted","Data":"524ed0006c9776877eca0cbdc06c758d32e505ff605de70715f196a841477350"} Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.564885 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.580740 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" podStartSLOduration=7.493082212 podStartE2EDuration="1m20.580722594s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:34:16.670184684 +0000 UTC m=+1291.666122071" lastFinishedPulling="2026-02-26 08:35:29.757825046 +0000 UTC m=+1364.753762453" observedRunningTime="2026-02-26 08:35:33.57387289 +0000 UTC m=+1368.569810287" watchObservedRunningTime="2026-02-26 08:35:33.580722594 +0000 UTC 
m=+1368.576659981" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.610213 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" podStartSLOduration=64.340053998 podStartE2EDuration="1m19.610187398s" podCreationTimestamp="2026-02-26 08:34:14 +0000 UTC" firstStartedPulling="2026-02-26 08:35:16.860297496 +0000 UTC m=+1351.856234883" lastFinishedPulling="2026-02-26 08:35:32.130430896 +0000 UTC m=+1367.126368283" observedRunningTime="2026-02-26 08:35:33.602436389 +0000 UTC m=+1368.598373776" watchObservedRunningTime="2026-02-26 08:35:33.610187398 +0000 UTC m=+1368.606124785" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.643691 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" podStartSLOduration=8.434290094 podStartE2EDuration="1m20.643673646s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:34:17.548855985 +0000 UTC m=+1292.544793372" lastFinishedPulling="2026-02-26 08:35:29.758239537 +0000 UTC m=+1364.754176924" observedRunningTime="2026-02-26 08:35:33.641047442 +0000 UTC m=+1368.636984839" watchObservedRunningTime="2026-02-26 08:35:33.643673646 +0000 UTC m=+1368.639611033" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.669703 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" podStartSLOduration=7.665614517 podStartE2EDuration="1m20.669681142s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:34:16.753702929 +0000 UTC m=+1291.749640316" lastFinishedPulling="2026-02-26 08:35:29.757769554 +0000 UTC m=+1364.753706941" observedRunningTime="2026-02-26 08:35:33.660627526 +0000 UTC m=+1368.656564913" watchObservedRunningTime="2026-02-26 08:35:33.669681142 +0000 UTC 
m=+1368.665618529" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.684830 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" podStartSLOduration=5.311440998 podStartE2EDuration="1m20.68480302s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:34:16.758779422 +0000 UTC m=+1291.754716809" lastFinishedPulling="2026-02-26 08:35:32.132141434 +0000 UTC m=+1367.128078831" observedRunningTime="2026-02-26 08:35:33.684179743 +0000 UTC m=+1368.680117130" watchObservedRunningTime="2026-02-26 08:35:33.68480302 +0000 UTC m=+1368.680740407" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.709183 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" podStartSLOduration=71.346004484 podStartE2EDuration="1m20.709102078s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:35:22.767463026 +0000 UTC m=+1357.763400413" lastFinishedPulling="2026-02-26 08:35:32.1305606 +0000 UTC m=+1367.126498007" observedRunningTime="2026-02-26 08:35:33.704788036 +0000 UTC m=+1368.700725423" watchObservedRunningTime="2026-02-26 08:35:33.709102078 +0000 UTC m=+1368.705039465" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.735096 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" podStartSLOduration=8.932619009 podStartE2EDuration="1m20.735073864s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:34:15.987079127 +0000 UTC m=+1290.983016524" lastFinishedPulling="2026-02-26 08:35:27.789533992 +0000 UTC m=+1362.785471379" observedRunningTime="2026-02-26 08:35:33.727645203 +0000 UTC m=+1368.723582590" watchObservedRunningTime="2026-02-26 08:35:33.735073864 +0000 UTC m=+1368.731011251" 
Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.755307 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx" podStartSLOduration=5.317695404 podStartE2EDuration="1m19.755278406s" podCreationTimestamp="2026-02-26 08:34:14 +0000 UTC" firstStartedPulling="2026-02-26 08:34:17.692833414 +0000 UTC m=+1292.688770801" lastFinishedPulling="2026-02-26 08:35:32.130416396 +0000 UTC m=+1367.126353803" observedRunningTime="2026-02-26 08:35:33.747623329 +0000 UTC m=+1368.743560716" watchObservedRunningTime="2026-02-26 08:35:33.755278406 +0000 UTC m=+1368.751215793" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.809438 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" podStartSLOduration=9.712694138 podStartE2EDuration="1m20.809407378s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:34:16.693019578 +0000 UTC m=+1291.688956965" lastFinishedPulling="2026-02-26 08:35:27.789732808 +0000 UTC m=+1362.785670205" observedRunningTime="2026-02-26 08:35:33.799661952 +0000 UTC m=+1368.795599349" watchObservedRunningTime="2026-02-26 08:35:33.809407378 +0000 UTC m=+1368.805344765" Feb 26 08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.835474 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" podStartSLOduration=5.459887312 podStartE2EDuration="1m20.835454546s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:34:16.753455192 +0000 UTC m=+1291.749392579" lastFinishedPulling="2026-02-26 08:35:32.129022426 +0000 UTC m=+1367.124959813" observedRunningTime="2026-02-26 08:35:33.832037439 +0000 UTC m=+1368.827974826" watchObservedRunningTime="2026-02-26 08:35:33.835454546 +0000 UTC m=+1368.831391933" Feb 26 
08:35:33 crc kubenswrapper[4741]: I0226 08:35:33.940908 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" Feb 26 08:35:36 crc kubenswrapper[4741]: I0226 08:35:36.600680 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" event={"ID":"dbdb4143-6ca6-4468-ae59-db0a15ae9229","Type":"ContainerStarted","Data":"0685ca81cde53718145c514af2878d1cead37cf5ba454d3f956644126c375c59"} Feb 26 08:35:36 crc kubenswrapper[4741]: I0226 08:35:36.603302 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" Feb 26 08:35:36 crc kubenswrapper[4741]: I0226 08:35:36.636196 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" podStartSLOduration=4.476152465 podStartE2EDuration="1m22.636156935s" podCreationTimestamp="2026-02-26 08:34:14 +0000 UTC" firstStartedPulling="2026-02-26 08:34:17.647428794 +0000 UTC m=+1292.643366181" lastFinishedPulling="2026-02-26 08:35:35.807433224 +0000 UTC m=+1370.803370651" observedRunningTime="2026-02-26 08:35:36.622065936 +0000 UTC m=+1371.618003343" watchObservedRunningTime="2026-02-26 08:35:36.636156935 +0000 UTC m=+1371.632094322" Feb 26 08:35:37 crc kubenswrapper[4741]: I0226 08:35:37.581448 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" Feb 26 08:35:37 crc kubenswrapper[4741]: I0226 08:35:37.617317 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" event={"ID":"b2c3a19d-a170-476f-a589-e7cde492ac1d","Type":"ContainerStarted","Data":"f02d32e5799678d05771327a01fdb48199a5edaeae64a3a98a36669a428a3ef2"} Feb 26 
08:35:37 crc kubenswrapper[4741]: I0226 08:35:37.617643 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" Feb 26 08:35:37 crc kubenswrapper[4741]: I0226 08:35:37.654709 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" podStartSLOduration=3.498578545 podStartE2EDuration="1m24.65468452s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:34:15.428817569 +0000 UTC m=+1290.424754966" lastFinishedPulling="2026-02-26 08:35:36.584923534 +0000 UTC m=+1371.580860941" observedRunningTime="2026-02-26 08:35:37.650388508 +0000 UTC m=+1372.646325915" watchObservedRunningTime="2026-02-26 08:35:37.65468452 +0000 UTC m=+1372.650621907" Feb 26 08:35:39 crc kubenswrapper[4741]: I0226 08:35:39.665198 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" event={"ID":"6980cc82-375e-4057-8dd6-1518d19891ed","Type":"ContainerStarted","Data":"65118c356c3984d61c9c8be66ea9c9016b660c8a5a6d13c30e75a8496698d159"} Feb 26 08:35:39 crc kubenswrapper[4741]: I0226 08:35:39.666368 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" Feb 26 08:35:39 crc kubenswrapper[4741]: I0226 08:35:39.689039 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" podStartSLOduration=5.3329137509999995 podStartE2EDuration="1m26.689013154s" podCreationTimestamp="2026-02-26 08:34:13 +0000 UTC" firstStartedPulling="2026-02-26 08:34:17.396361716 +0000 UTC m=+1292.392299103" lastFinishedPulling="2026-02-26 08:35:38.752461119 +0000 UTC m=+1373.748398506" observedRunningTime="2026-02-26 08:35:39.683729114 +0000 UTC 
m=+1374.679666501" watchObservedRunningTime="2026-02-26 08:35:39.689013154 +0000 UTC m=+1374.684950541" Feb 26 08:35:40 crc kubenswrapper[4741]: I0226 08:35:40.998515 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" Feb 26 08:35:43 crc kubenswrapper[4741]: I0226 08:35:43.981091 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" Feb 26 08:35:44 crc kubenswrapper[4741]: I0226 08:35:44.190930 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" Feb 26 08:35:44 crc kubenswrapper[4741]: I0226 08:35:44.468385 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" Feb 26 08:35:44 crc kubenswrapper[4741]: I0226 08:35:44.710049 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" Feb 26 08:35:44 crc kubenswrapper[4741]: I0226 08:35:44.710903 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" Feb 26 08:35:44 crc kubenswrapper[4741]: I0226 08:35:44.738081 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" Feb 26 08:35:44 crc kubenswrapper[4741]: I0226 08:35:44.739077 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" Feb 26 08:35:44 crc kubenswrapper[4741]: I0226 08:35:44.739440 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" Feb 26 08:35:44 crc kubenswrapper[4741]: I0226 08:35:44.742884 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" Feb 26 08:35:45 crc kubenswrapper[4741]: I0226 08:35:45.344407 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" Feb 26 08:35:46 crc kubenswrapper[4741]: I0226 08:35:46.157506 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" Feb 26 08:35:55 crc kubenswrapper[4741]: I0226 08:35:55.150092 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:35:55 crc kubenswrapper[4741]: I0226 08:35:55.151009 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:36:00 crc kubenswrapper[4741]: I0226 08:36:00.145482 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534916-hwf5d"] Feb 26 08:36:00 crc kubenswrapper[4741]: I0226 08:36:00.150213 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534916-hwf5d" Feb 26 08:36:00 crc kubenswrapper[4741]: I0226 08:36:00.153028 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:36:00 crc kubenswrapper[4741]: I0226 08:36:00.153660 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:36:00 crc kubenswrapper[4741]: I0226 08:36:00.154375 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:36:00 crc kubenswrapper[4741]: I0226 08:36:00.164850 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534916-hwf5d"] Feb 26 08:36:00 crc kubenswrapper[4741]: I0226 08:36:00.271770 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwbrd\" (UniqueName: \"kubernetes.io/projected/e7f5b396-e468-4a73-b1e3-258af5766c4c-kube-api-access-hwbrd\") pod \"auto-csr-approver-29534916-hwf5d\" (UID: \"e7f5b396-e468-4a73-b1e3-258af5766c4c\") " pod="openshift-infra/auto-csr-approver-29534916-hwf5d" Feb 26 08:36:00 crc kubenswrapper[4741]: I0226 08:36:00.375058 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwbrd\" (UniqueName: \"kubernetes.io/projected/e7f5b396-e468-4a73-b1e3-258af5766c4c-kube-api-access-hwbrd\") pod \"auto-csr-approver-29534916-hwf5d\" (UID: \"e7f5b396-e468-4a73-b1e3-258af5766c4c\") " pod="openshift-infra/auto-csr-approver-29534916-hwf5d" Feb 26 08:36:00 crc kubenswrapper[4741]: I0226 08:36:00.412749 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwbrd\" (UniqueName: \"kubernetes.io/projected/e7f5b396-e468-4a73-b1e3-258af5766c4c-kube-api-access-hwbrd\") pod \"auto-csr-approver-29534916-hwf5d\" (UID: \"e7f5b396-e468-4a73-b1e3-258af5766c4c\") " 
pod="openshift-infra/auto-csr-approver-29534916-hwf5d" Feb 26 08:36:00 crc kubenswrapper[4741]: I0226 08:36:00.473991 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534916-hwf5d" Feb 26 08:36:00 crc kubenswrapper[4741]: I0226 08:36:00.975779 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534916-hwf5d"] Feb 26 08:36:01 crc kubenswrapper[4741]: I0226 08:36:01.910985 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534916-hwf5d" event={"ID":"e7f5b396-e468-4a73-b1e3-258af5766c4c","Type":"ContainerStarted","Data":"51368ee01127db4e338cd6dc796397254adea1d0b725b1e5ebfea9d52aea34e8"} Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.109386 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g29mj"] Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.113871 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g29mj" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.116576 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.116768 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-zdzl6" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.116866 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.116929 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.128885 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g29mj"] Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.187488 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v5j5l"] Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.189178 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.190924 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.208441 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v5j5l"] Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.242081 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v5j5l\" (UID: \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.242199 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8319c09-a1ef-4694-a3f2-ae3d49ab3e53-config\") pod \"dnsmasq-dns-675f4bcbfc-g29mj\" (UID: \"c8319c09-a1ef-4694-a3f2-ae3d49ab3e53\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g29mj" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.242320 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvp9t\" (UniqueName: \"kubernetes.io/projected/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-kube-api-access-kvp9t\") pod \"dnsmasq-dns-78dd6ddcc-v5j5l\" (UID: \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.242765 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-config\") pod \"dnsmasq-dns-78dd6ddcc-v5j5l\" (UID: \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" Feb 26 08:36:02 
crc kubenswrapper[4741]: I0226 08:36:02.242871 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbhph\" (UniqueName: \"kubernetes.io/projected/c8319c09-a1ef-4694-a3f2-ae3d49ab3e53-kube-api-access-tbhph\") pod \"dnsmasq-dns-675f4bcbfc-g29mj\" (UID: \"c8319c09-a1ef-4694-a3f2-ae3d49ab3e53\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g29mj" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.344405 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-config\") pod \"dnsmasq-dns-78dd6ddcc-v5j5l\" (UID: \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.345159 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbhph\" (UniqueName: \"kubernetes.io/projected/c8319c09-a1ef-4694-a3f2-ae3d49ab3e53-kube-api-access-tbhph\") pod \"dnsmasq-dns-675f4bcbfc-g29mj\" (UID: \"c8319c09-a1ef-4694-a3f2-ae3d49ab3e53\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g29mj" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.345741 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v5j5l\" (UID: \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.345889 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8319c09-a1ef-4694-a3f2-ae3d49ab3e53-config\") pod \"dnsmasq-dns-675f4bcbfc-g29mj\" (UID: \"c8319c09-a1ef-4694-a3f2-ae3d49ab3e53\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g29mj" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.346826 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvp9t\" (UniqueName: \"kubernetes.io/projected/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-kube-api-access-kvp9t\") pod \"dnsmasq-dns-78dd6ddcc-v5j5l\" (UID: \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.346338 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v5j5l\" (UID: \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.346766 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8319c09-a1ef-4694-a3f2-ae3d49ab3e53-config\") pod \"dnsmasq-dns-675f4bcbfc-g29mj\" (UID: \"c8319c09-a1ef-4694-a3f2-ae3d49ab3e53\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g29mj" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.345389 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-config\") pod \"dnsmasq-dns-78dd6ddcc-v5j5l\" (UID: \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.366641 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbhph\" (UniqueName: \"kubernetes.io/projected/c8319c09-a1ef-4694-a3f2-ae3d49ab3e53-kube-api-access-tbhph\") pod \"dnsmasq-dns-675f4bcbfc-g29mj\" (UID: \"c8319c09-a1ef-4694-a3f2-ae3d49ab3e53\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g29mj" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.377462 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvp9t\" (UniqueName: 
\"kubernetes.io/projected/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-kube-api-access-kvp9t\") pod \"dnsmasq-dns-78dd6ddcc-v5j5l\" (UID: \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.435407 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g29mj" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.511322 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" Feb 26 08:36:02 crc kubenswrapper[4741]: I0226 08:36:02.971327 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g29mj"] Feb 26 08:36:02 crc kubenswrapper[4741]: W0226 08:36:02.971328 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8319c09_a1ef_4694_a3f2_ae3d49ab3e53.slice/crio-881d8d26c6f029355b5046a9f1633b38a0aa6cee62091f789da515e972d47dc3 WatchSource:0}: Error finding container 881d8d26c6f029355b5046a9f1633b38a0aa6cee62091f789da515e972d47dc3: Status 404 returned error can't find the container with id 881d8d26c6f029355b5046a9f1633b38a0aa6cee62091f789da515e972d47dc3 Feb 26 08:36:03 crc kubenswrapper[4741]: I0226 08:36:03.974326 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-g29mj" event={"ID":"c8319c09-a1ef-4694-a3f2-ae3d49ab3e53","Type":"ContainerStarted","Data":"881d8d26c6f029355b5046a9f1633b38a0aa6cee62091f789da515e972d47dc3"} Feb 26 08:36:04 crc kubenswrapper[4741]: I0226 08:36:04.985632 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" podUID="b2c3a19d-a170-476f-a589-e7cde492ac1d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Feb 26 08:36:05 crc kubenswrapper[4741]: W0226 08:36:05.161808 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e7cfc3f_3f88_42ed_87a8_6a726a9c2d37.slice/crio-e75e16a7c3f110f2a400a44dcc2e97083f9762bd96123ce74ef884a3e13c341e WatchSource:0}: Error finding container e75e16a7c3f110f2a400a44dcc2e97083f9762bd96123ce74ef884a3e13c341e: Status 404 returned error can't find the container with id e75e16a7c3f110f2a400a44dcc2e97083f9762bd96123ce74ef884a3e13c341e Feb 26 08:36:05 crc kubenswrapper[4741]: I0226 08:36:05.162054 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v5j5l"] Feb 26 08:36:05 crc kubenswrapper[4741]: I0226 08:36:05.820792 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g29mj"] Feb 26 08:36:05 crc kubenswrapper[4741]: I0226 08:36:05.860060 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krvtm"] Feb 26 08:36:05 crc kubenswrapper[4741]: I0226 08:36:05.862425 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-krvtm" Feb 26 08:36:05 crc kubenswrapper[4741]: I0226 08:36:05.876247 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krvtm"] Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.026428 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" event={"ID":"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37","Type":"ContainerStarted","Data":"e75e16a7c3f110f2a400a44dcc2e97083f9762bd96123ce74ef884a3e13c341e"} Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.040343 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd9s4\" (UniqueName: \"kubernetes.io/projected/ce3a54c1-256c-4209-bbe4-07b9d62be849-kube-api-access-qd9s4\") pod \"dnsmasq-dns-666b6646f7-krvtm\" (UID: \"ce3a54c1-256c-4209-bbe4-07b9d62be849\") " pod="openstack/dnsmasq-dns-666b6646f7-krvtm" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.040861 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce3a54c1-256c-4209-bbe4-07b9d62be849-config\") pod \"dnsmasq-dns-666b6646f7-krvtm\" (UID: \"ce3a54c1-256c-4209-bbe4-07b9d62be849\") " pod="openstack/dnsmasq-dns-666b6646f7-krvtm" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.041205 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce3a54c1-256c-4209-bbe4-07b9d62be849-dns-svc\") pod \"dnsmasq-dns-666b6646f7-krvtm\" (UID: \"ce3a54c1-256c-4209-bbe4-07b9d62be849\") " pod="openstack/dnsmasq-dns-666b6646f7-krvtm" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.143506 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce3a54c1-256c-4209-bbe4-07b9d62be849-config\") 
pod \"dnsmasq-dns-666b6646f7-krvtm\" (UID: \"ce3a54c1-256c-4209-bbe4-07b9d62be849\") " pod="openstack/dnsmasq-dns-666b6646f7-krvtm" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.143917 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce3a54c1-256c-4209-bbe4-07b9d62be849-dns-svc\") pod \"dnsmasq-dns-666b6646f7-krvtm\" (UID: \"ce3a54c1-256c-4209-bbe4-07b9d62be849\") " pod="openstack/dnsmasq-dns-666b6646f7-krvtm" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.144333 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd9s4\" (UniqueName: \"kubernetes.io/projected/ce3a54c1-256c-4209-bbe4-07b9d62be849-kube-api-access-qd9s4\") pod \"dnsmasq-dns-666b6646f7-krvtm\" (UID: \"ce3a54c1-256c-4209-bbe4-07b9d62be849\") " pod="openstack/dnsmasq-dns-666b6646f7-krvtm" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.145124 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce3a54c1-256c-4209-bbe4-07b9d62be849-config\") pod \"dnsmasq-dns-666b6646f7-krvtm\" (UID: \"ce3a54c1-256c-4209-bbe4-07b9d62be849\") " pod="openstack/dnsmasq-dns-666b6646f7-krvtm" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.145161 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce3a54c1-256c-4209-bbe4-07b9d62be849-dns-svc\") pod \"dnsmasq-dns-666b6646f7-krvtm\" (UID: \"ce3a54c1-256c-4209-bbe4-07b9d62be849\") " pod="openstack/dnsmasq-dns-666b6646f7-krvtm" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.170186 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd9s4\" (UniqueName: \"kubernetes.io/projected/ce3a54c1-256c-4209-bbe4-07b9d62be849-kube-api-access-qd9s4\") pod \"dnsmasq-dns-666b6646f7-krvtm\" (UID: \"ce3a54c1-256c-4209-bbe4-07b9d62be849\") " 
pod="openstack/dnsmasq-dns-666b6646f7-krvtm" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.180750 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-krvtm" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.249773 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v5j5l"] Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.283489 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8xgsf"] Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.288994 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.451145 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97d8e026-9359-4dfe-b9b0-01857c576cc5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8xgsf\" (UID: \"97d8e026-9359-4dfe-b9b0-01857c576cc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.451594 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6gqc\" (UniqueName: \"kubernetes.io/projected/97d8e026-9359-4dfe-b9b0-01857c576cc5-kube-api-access-q6gqc\") pod \"dnsmasq-dns-57d769cc4f-8xgsf\" (UID: \"97d8e026-9359-4dfe-b9b0-01857c576cc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.451697 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d8e026-9359-4dfe-b9b0-01857c576cc5-config\") pod \"dnsmasq-dns-57d769cc4f-8xgsf\" (UID: \"97d8e026-9359-4dfe-b9b0-01857c576cc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 
08:36:06.466721 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8xgsf"] Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.558434 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97d8e026-9359-4dfe-b9b0-01857c576cc5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8xgsf\" (UID: \"97d8e026-9359-4dfe-b9b0-01857c576cc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.558972 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6gqc\" (UniqueName: \"kubernetes.io/projected/97d8e026-9359-4dfe-b9b0-01857c576cc5-kube-api-access-q6gqc\") pod \"dnsmasq-dns-57d769cc4f-8xgsf\" (UID: \"97d8e026-9359-4dfe-b9b0-01857c576cc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.559003 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d8e026-9359-4dfe-b9b0-01857c576cc5-config\") pod \"dnsmasq-dns-57d769cc4f-8xgsf\" (UID: \"97d8e026-9359-4dfe-b9b0-01857c576cc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.560733 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d8e026-9359-4dfe-b9b0-01857c576cc5-config\") pod \"dnsmasq-dns-57d769cc4f-8xgsf\" (UID: \"97d8e026-9359-4dfe-b9b0-01857c576cc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.562013 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97d8e026-9359-4dfe-b9b0-01857c576cc5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8xgsf\" (UID: \"97d8e026-9359-4dfe-b9b0-01857c576cc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" 
Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.606134 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6gqc\" (UniqueName: \"kubernetes.io/projected/97d8e026-9359-4dfe-b9b0-01857c576cc5-kube-api-access-q6gqc\") pod \"dnsmasq-dns-57d769cc4f-8xgsf\" (UID: \"97d8e026-9359-4dfe-b9b0-01857c576cc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.781178 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.985963 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.988877 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 08:36:06 crc kubenswrapper[4741]: I0226 08:36:06.999808 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.000148 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.000317 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.001277 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.001658 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.001840 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.003289 4741 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-trgtr" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.044579 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.056260 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.061878 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.064370 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.074786 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26969fe6-2bb9-4f23-8c49-d9d359763da3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.074897 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.074937 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.075000 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.075022 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.075065 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv7ks\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-kube-api-access-cv7ks\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.075125 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26969fe6-2bb9-4f23-8c49-d9d359763da3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.075172 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-config-data\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.075235 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.075271 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.075313 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.141199 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.157845 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534916-hwf5d" event={"ID":"e7f5b396-e468-4a73-b1e3-258af5766c4c","Type":"ContainerStarted","Data":"175267b7510bc89015e793f52051f1c8364c1b56302ecbf0c623a22ea13b77d8"}
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.187890 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190420 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190509 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190552 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190586 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/815578f6-90b1-4afc-91c7-d24a59a11b23-pod-info\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190626 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190667 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190697 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190734 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190766 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv7ks\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-kube-api-access-cv7ks\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190797 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-config-data\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190851 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26969fe6-2bb9-4f23-8c49-d9d359763da3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190888 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjrbt\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-kube-api-access-kjrbt\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190919 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190944 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.190976 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191000 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-pod-info\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191031 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-config-data\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191066 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191139 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191191 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191228 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191257 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191285 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-server-conf\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191319 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191354 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-server-conf\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191391 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191424 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/815578f6-90b1-4afc-91c7-d24a59a11b23-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191454 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rjcr\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-kube-api-access-8rjcr\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191501 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.191537 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26969fe6-2bb9-4f23-8c49-d9d359763da3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.211927 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.212584 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.213400 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-config-data\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.214298 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.214971 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.214984 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-config-data\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.215076 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.215395 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.252126 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26969fe6-2bb9-4f23-8c49-d9d359763da3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.257591 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.267855 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.278023 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.326247 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26969fe6-2bb9-4f23-8c49-d9d359763da3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.327961 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv7ks\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-kube-api-access-cv7ks\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.341626 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.341697 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/57beecbda7054f70039ef944bb56736e90d719c0f9e55f6bbb987ff859fc9f8b/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.427243 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.427612 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/815578f6-90b1-4afc-91c7-d24a59a11b23-pod-info\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.437817 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.438639 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.439169 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-config-data\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.442564 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjrbt\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-kube-api-access-kjrbt\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.442616 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.442661 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.442697 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.442727 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-pod-info\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.442798 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.442877 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.442985 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.443026 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-server-conf\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.443053 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.443121 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-server-conf\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.443181 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/815578f6-90b1-4afc-91c7-d24a59a11b23-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.443235 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.443256 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rjcr\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-kube-api-access-8rjcr\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.443331 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.443389 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-config-data\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.443416 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.444366 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.442342 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.451726 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/815578f6-90b1-4afc-91c7-d24a59a11b23-pod-info\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.453528 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.456383 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.456567 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.458802 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.460820 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-config-data\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.466807 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.471755 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.472642 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-server-conf\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.473675 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.474449 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-server-conf\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.487487 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.488908 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/815578f6-90b1-4afc-91c7-d24a59a11b23-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.489402 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-pod-info\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.492017 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjrbt\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-kube-api-access-kjrbt\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.492668 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.492732 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/90a9f1f67587bf38d415b7fdc0210d07aee6e17315e9a5e9ad6c5d6b568aaaf6/globalmount\"" pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.496959 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.497121 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9b25ae51cf46b80f139cc98e7ff2e70fbe8dd51bf375cc8bf062bffee99caeec/globalmount\"" pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.499437 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-config-data\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.506171 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rjcr\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-kube-api-access-8rjcr\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.508279 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krvtm"]
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.513445 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.557085 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\") pod \"rabbitmq-server-0\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.605440 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534916-hwf5d" podStartSLOduration=3.433591205 podStartE2EDuration="7.605077783s" podCreationTimestamp="2026-02-26 08:36:00 +0000 UTC" firstStartedPulling="2026-02-26 08:36:00.988319309 +0000 UTC m=+1395.984256696" lastFinishedPulling="2026-02-26 08:36:05.159805887 +0000 UTC m=+1400.155743274" observedRunningTime="2026-02-26 08:36:07.35345557 +0000 UTC m=+1402.349392977" watchObservedRunningTime="2026-02-26 08:36:07.605077783 +0000 UTC m=+1402.601015170"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.624241 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.628254 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\") pod \"rabbitmq-server-2\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " pod="openstack/rabbitmq-server-2"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.631529 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") pod \"rabbitmq-server-1\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " pod="openstack/rabbitmq-server-1"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.634935 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.635103 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.640203 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2cwkd"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.640434 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.640598 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.644549 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.645028 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.645227 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.645419 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.655232 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.656580 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.661400 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.666817 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.673645 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.674202 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.674430 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-jhpwp"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.674438 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.693199 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.721441 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.736019 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8xgsf"] Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768291 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768342 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-079afb70-964f-4a28-ba21-b9d42945983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-079afb70-964f-4a28-ba21-b9d42945983d\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768367 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768407 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/403c217b-d3d9-47a3-8a5a-4f6e917edcad-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768435 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pvc-a55e1d48-c9fe-4a5d-909f-a9b05896e3ec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a55e1d48-c9fe-4a5d-909f-a9b05896e3ec\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768451 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768476 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768514 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768549 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-kolla-config\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768575 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-6csx5\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-kube-api-access-6csx5\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768593 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25b2m\" (UniqueName: \"kubernetes.io/projected/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-kube-api-access-25b2m\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768614 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768639 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.768900 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.769196 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-config-data-default\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.769698 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.769751 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.769777 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/403c217b-d3d9-47a3-8a5a-4f6e917edcad-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.769877 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.775322 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 26 08:36:07 crc kubenswrapper[4741]: W0226 08:36:07.826414 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97d8e026_9359_4dfe_b9b0_01857c576cc5.slice/crio-110bd7dabf809866f130c5e44e592344b19c7016b33062f6891a94ca8b9b9c96 WatchSource:0}: Error finding container 110bd7dabf809866f130c5e44e592344b19c7016b33062f6891a94ca8b9b9c96: Status 404 returned error can't find the container with id 110bd7dabf809866f130c5e44e592344b19c7016b33062f6891a94ca8b9b9c96 Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.872717 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.872782 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-kolla-config\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.872816 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csx5\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-kube-api-access-6csx5\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.872839 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25b2m\" (UniqueName: 
\"kubernetes.io/projected/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-kube-api-access-25b2m\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.872859 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.872890 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.872913 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.872964 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-config-data-default\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.872995 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.873022 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.873042 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/403c217b-d3d9-47a3-8a5a-4f6e917edcad-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.873072 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.873101 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.873141 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-079afb70-964f-4a28-ba21-b9d42945983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-079afb70-964f-4a28-ba21-b9d42945983d\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.873166 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.873212 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/403c217b-d3d9-47a3-8a5a-4f6e917edcad-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.873252 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a55e1d48-c9fe-4a5d-909f-a9b05896e3ec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a55e1d48-c9fe-4a5d-909f-a9b05896e3ec\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.873276 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.873303 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 
08:36:07.873975 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.875369 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.878219 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.878544 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.878800 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.879581 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/403c217b-d3d9-47a3-8a5a-4f6e917edcad-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.880084 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.883004 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-kolla-config\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.887718 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.888412 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.888974 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/403c217b-d3d9-47a3-8a5a-4f6e917edcad-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.889276 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.896034 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csx5\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-kube-api-access-6csx5\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.896808 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-config-data-default\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.897169 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.897173 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.897193 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a55e1d48-c9fe-4a5d-909f-a9b05896e3ec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a55e1d48-c9fe-4a5d-909f-a9b05896e3ec\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5532a28ef612f83045cf4f4ebbea29c6f0b36252e4e24472e1ee6101252caf90/globalmount\"" pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.897212 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-079afb70-964f-4a28-ba21-b9d42945983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-079afb70-964f-4a28-ba21-b9d42945983d\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4a48402aa3ae82a7c0531ca5bb6b953c670f31e961f44e8dfbb6dc451a9362d/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.897256 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.897763 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.907876 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-25b2m\" (UniqueName: \"kubernetes.io/projected/ed8ae863-261b-4cbd-945a-b79c99fa0a9f-kube-api-access-25b2m\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:07 crc kubenswrapper[4741]: I0226 08:36:07.994937 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-079afb70-964f-4a28-ba21-b9d42945983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-079afb70-964f-4a28-ba21-b9d42945983d\") pod \"rabbitmq-cell1-server-0\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.004771 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a55e1d48-c9fe-4a5d-909f-a9b05896e3ec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a55e1d48-c9fe-4a5d-909f-a9b05896e3ec\") pod \"openstack-galera-0\" (UID: \"ed8ae863-261b-4cbd-945a-b79c99fa0a9f\") " pod="openstack/openstack-galera-0" Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.030405 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.182702 4741 generic.go:334] "Generic (PLEG): container finished" podID="e7f5b396-e468-4a73-b1e3-258af5766c4c" containerID="175267b7510bc89015e793f52051f1c8364c1b56302ecbf0c623a22ea13b77d8" exitCode=0 Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.183258 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534916-hwf5d" event={"ID":"e7f5b396-e468-4a73-b1e3-258af5766c4c","Type":"ContainerDied","Data":"175267b7510bc89015e793f52051f1c8364c1b56302ecbf0c623a22ea13b77d8"} Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.193851 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-krvtm" event={"ID":"ce3a54c1-256c-4209-bbe4-07b9d62be849","Type":"ContainerStarted","Data":"dc482b77d92a89d50fec025f6731316afb8fe281d228224cdaac4887827e4e9a"} Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.214961 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" event={"ID":"97d8e026-9359-4dfe-b9b0-01857c576cc5","Type":"ContainerStarted","Data":"110bd7dabf809866f130c5e44e592344b19c7016b33062f6891a94ca8b9b9c96"} Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.295270 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.552311 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.643151 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.807037 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 26 08:36:08 crc kubenswrapper[4741]: W0226 08:36:08.827079 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded8ae863_261b_4cbd_945a_b79c99fa0a9f.slice/crio-70198911e84d2fe387d3795be89f8aeea9efc4abc5721a638a6f4c6459b71ddf WatchSource:0}: Error finding container 70198911e84d2fe387d3795be89f8aeea9efc4abc5721a638a6f4c6459b71ddf: Status 404 returned error can't find the container with id 70198911e84d2fe387d3795be89f8aeea9efc4abc5721a638a6f4c6459b71ddf Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.870802 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.953936 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.960363 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.963430 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.963693 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-lmdw6" Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.963694 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 26 08:36:08 crc kubenswrapper[4741]: I0226 08:36:08.963753 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.003050 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.079600 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1496b8-9f14-472d-af02-7357f75ba7cf-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.079710 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-090e3a81-145d-42f3-a536-0445b121f985\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-090e3a81-145d-42f3-a536-0445b121f985\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.079750 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2b1496b8-9f14-472d-af02-7357f75ba7cf-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.079776 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2b1496b8-9f14-472d-af02-7357f75ba7cf-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.079836 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2b1496b8-9f14-472d-af02-7357f75ba7cf-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.079885 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b1496b8-9f14-472d-af02-7357f75ba7cf-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.079936 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2b1496b8-9f14-472d-af02-7357f75ba7cf-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.080024 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-t6j54\" (UniqueName: \"kubernetes.io/projected/2b1496b8-9f14-472d-af02-7357f75ba7cf-kube-api-access-t6j54\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.103142 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.195270 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6j54\" (UniqueName: \"kubernetes.io/projected/2b1496b8-9f14-472d-af02-7357f75ba7cf-kube-api-access-t6j54\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.195335 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1496b8-9f14-472d-af02-7357f75ba7cf-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.195390 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-090e3a81-145d-42f3-a536-0445b121f985\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-090e3a81-145d-42f3-a536-0445b121f985\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.195421 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b1496b8-9f14-472d-af02-7357f75ba7cf-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" 
Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.195448 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2b1496b8-9f14-472d-af02-7357f75ba7cf-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.195506 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2b1496b8-9f14-472d-af02-7357f75ba7cf-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.195552 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b1496b8-9f14-472d-af02-7357f75ba7cf-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.195603 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2b1496b8-9f14-472d-af02-7357f75ba7cf-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.196124 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2b1496b8-9f14-472d-af02-7357f75ba7cf-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.197065 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2b1496b8-9f14-472d-af02-7357f75ba7cf-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.197999 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2b1496b8-9f14-472d-af02-7357f75ba7cf-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.198376 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b1496b8-9f14-472d-af02-7357f75ba7cf-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.207048 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.207129 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-090e3a81-145d-42f3-a536-0445b121f985\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-090e3a81-145d-42f3-a536-0445b121f985\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c6fbec4451253a535ccc1ef865697a89d29b3fd4f10c18de659762426355fbdb/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.211475 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1496b8-9f14-472d-af02-7357f75ba7cf-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.214188 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b1496b8-9f14-472d-af02-7357f75ba7cf-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.229383 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6j54\" (UniqueName: \"kubernetes.io/projected/2b1496b8-9f14-472d-af02-7357f75ba7cf-kube-api-access-t6j54\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.295197 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.297456 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.301605 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.306465 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"403c217b-d3d9-47a3-8a5a-4f6e917edcad","Type":"ContainerStarted","Data":"d4d6a418c49651b93af53b4cbaa074ed84577723e3444a12554ac13e7ee9e00b"} Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.307869 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"26969fe6-2bb9-4f23-8c49-d9d359763da3","Type":"ContainerStarted","Data":"740f6a48d4941d4ec2f62286a914924f22f284d7fb520b8188a19b4f456cdc74"} Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.308222 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.310697 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d20c309e-9b10-446d-a7f7-8aad2bdecfc9","Type":"ContainerStarted","Data":"9c7a0e5e79e649be99fa0674cc554c94490e3e8dff9a81f5fed75f9b4979d27c"} Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.312018 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5s4h5" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.312853 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.313276 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ed8ae863-261b-4cbd-945a-b79c99fa0a9f","Type":"ContainerStarted","Data":"70198911e84d2fe387d3795be89f8aeea9efc4abc5721a638a6f4c6459b71ddf"} Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.330434 
4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"815578f6-90b1-4afc-91c7-d24a59a11b23","Type":"ContainerStarted","Data":"03a6d04ad5cacbab95bae5a1cbbd16d5b6029bff4801768920ac7e093d6daa32"} Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.357476 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-090e3a81-145d-42f3-a536-0445b121f985\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-090e3a81-145d-42f3-a536-0445b121f985\") pod \"openstack-cell1-galera-0\" (UID: \"2b1496b8-9f14-472d-af02-7357f75ba7cf\") " pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.513286 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d7ee427f-ada6-4496-a314-c5cd63abefcd-kolla-config\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.513362 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7ee427f-ada6-4496-a314-c5cd63abefcd-config-data\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.513439 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rl98\" (UniqueName: \"kubernetes.io/projected/d7ee427f-ada6-4496-a314-c5cd63abefcd-kube-api-access-6rl98\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.513508 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d7ee427f-ada6-4496-a314-c5cd63abefcd-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.513571 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7ee427f-ada6-4496-a314-c5cd63abefcd-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.596016 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.614815 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d7ee427f-ada6-4496-a314-c5cd63abefcd-kolla-config\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.614881 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7ee427f-ada6-4496-a314-c5cd63abefcd-config-data\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.614928 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rl98\" (UniqueName: \"kubernetes.io/projected/d7ee427f-ada6-4496-a314-c5cd63abefcd-kube-api-access-6rl98\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.614972 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d7ee427f-ada6-4496-a314-c5cd63abefcd-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.615011 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7ee427f-ada6-4496-a314-c5cd63abefcd-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.617586 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7ee427f-ada6-4496-a314-c5cd63abefcd-config-data\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.628751 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d7ee427f-ada6-4496-a314-c5cd63abefcd-kolla-config\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.632083 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7ee427f-ada6-4496-a314-c5cd63abefcd-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.643300 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7ee427f-ada6-4496-a314-c5cd63abefcd-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.650574 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rl98\" (UniqueName: \"kubernetes.io/projected/d7ee427f-ada6-4496-a314-c5cd63abefcd-kube-api-access-6rl98\") pod \"memcached-0\" (UID: \"d7ee427f-ada6-4496-a314-c5cd63abefcd\") " pod="openstack/memcached-0" Feb 26 08:36:09 crc kubenswrapper[4741]: I0226 08:36:09.682732 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 26 08:36:10 crc kubenswrapper[4741]: I0226 08:36:10.029637 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534916-hwf5d" Feb 26 08:36:10 crc kubenswrapper[4741]: I0226 08:36:10.130395 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwbrd\" (UniqueName: \"kubernetes.io/projected/e7f5b396-e468-4a73-b1e3-258af5766c4c-kube-api-access-hwbrd\") pod \"e7f5b396-e468-4a73-b1e3-258af5766c4c\" (UID: \"e7f5b396-e468-4a73-b1e3-258af5766c4c\") " Feb 26 08:36:10 crc kubenswrapper[4741]: I0226 08:36:10.134757 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f5b396-e468-4a73-b1e3-258af5766c4c-kube-api-access-hwbrd" (OuterVolumeSpecName: "kube-api-access-hwbrd") pod "e7f5b396-e468-4a73-b1e3-258af5766c4c" (UID: "e7f5b396-e468-4a73-b1e3-258af5766c4c"). InnerVolumeSpecName "kube-api-access-hwbrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:36:10 crc kubenswrapper[4741]: I0226 08:36:10.233209 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwbrd\" (UniqueName: \"kubernetes.io/projected/e7f5b396-e468-4a73-b1e3-258af5766c4c-kube-api-access-hwbrd\") on node \"crc\" DevicePath \"\"" Feb 26 08:36:10 crc kubenswrapper[4741]: I0226 08:36:10.378485 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534916-hwf5d" event={"ID":"e7f5b396-e468-4a73-b1e3-258af5766c4c","Type":"ContainerDied","Data":"51368ee01127db4e338cd6dc796397254adea1d0b725b1e5ebfea9d52aea34e8"} Feb 26 08:36:10 crc kubenswrapper[4741]: I0226 08:36:10.378569 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534916-hwf5d" Feb 26 08:36:10 crc kubenswrapper[4741]: I0226 08:36:10.378625 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51368ee01127db4e338cd6dc796397254adea1d0b725b1e5ebfea9d52aea34e8" Feb 26 08:36:10 crc kubenswrapper[4741]: I0226 08:36:10.550023 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 26 08:36:10 crc kubenswrapper[4741]: I0226 08:36:10.582740 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 08:36:11 crc kubenswrapper[4741]: I0226 08:36:11.142522 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534910-wqbsz"] Feb 26 08:36:11 crc kubenswrapper[4741]: I0226 08:36:11.160674 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534910-wqbsz"] Feb 26 08:36:11 crc kubenswrapper[4741]: I0226 08:36:11.434311 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"d7ee427f-ada6-4496-a314-c5cd63abefcd","Type":"ContainerStarted","Data":"ef7a24d50781c805922b2c9d86e832ea04e55bb330699984631009705e1bf0ca"} Feb 26 08:36:11 crc kubenswrapper[4741]: I0226 08:36:11.442932 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2b1496b8-9f14-472d-af02-7357f75ba7cf","Type":"ContainerStarted","Data":"ae02cc08c6c1b417b8e121f8aba866d76928b95f6491388d096e557307154205"} Feb 26 08:36:11 crc kubenswrapper[4741]: I0226 08:36:11.886172 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eba5bd55-de2f-4879-b367-dadffdd11853" path="/var/lib/kubelet/pods/eba5bd55-de2f-4879-b367-dadffdd11853/volumes" Feb 26 08:36:11 crc kubenswrapper[4741]: I0226 08:36:11.887076 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 08:36:11 crc kubenswrapper[4741]: E0226 08:36:11.887468 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f5b396-e468-4a73-b1e3-258af5766c4c" containerName="oc" Feb 26 08:36:11 crc kubenswrapper[4741]: I0226 08:36:11.887481 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f5b396-e468-4a73-b1e3-258af5766c4c" containerName="oc" Feb 26 08:36:11 crc kubenswrapper[4741]: I0226 08:36:11.887716 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f5b396-e468-4a73-b1e3-258af5766c4c" containerName="oc" Feb 26 08:36:11 crc kubenswrapper[4741]: I0226 08:36:11.890146 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 08:36:11 crc kubenswrapper[4741]: I0226 08:36:11.890245 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 08:36:11 crc kubenswrapper[4741]: I0226 08:36:11.908401 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-7gpqp" Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.025812 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfh68\" (UniqueName: \"kubernetes.io/projected/90779734-9d35-47b9-ac0b-dbf02e3453a5-kube-api-access-jfh68\") pod \"kube-state-metrics-0\" (UID: \"90779734-9d35-47b9-ac0b-dbf02e3453a5\") " pod="openstack/kube-state-metrics-0" Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.128862 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfh68\" (UniqueName: \"kubernetes.io/projected/90779734-9d35-47b9-ac0b-dbf02e3453a5-kube-api-access-jfh68\") pod \"kube-state-metrics-0\" (UID: \"90779734-9d35-47b9-ac0b-dbf02e3453a5\") " pod="openstack/kube-state-metrics-0" Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.178809 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfh68\" (UniqueName: \"kubernetes.io/projected/90779734-9d35-47b9-ac0b-dbf02e3453a5-kube-api-access-jfh68\") pod \"kube-state-metrics-0\" (UID: \"90779734-9d35-47b9-ac0b-dbf02e3453a5\") " pod="openstack/kube-state-metrics-0" Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.230957 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.709781 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t"] Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.720714 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.741611 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-gnr5c" Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.755509 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.765162 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5jrd\" (UniqueName: \"kubernetes.io/projected/432351e8-6adb-434f-b110-c141a4123d2c-kube-api-access-s5jrd\") pod \"observability-ui-dashboards-66cbf594b5-8jc6t\" (UID: \"432351e8-6adb-434f-b110-c141a4123d2c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.765218 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/432351e8-6adb-434f-b110-c141a4123d2c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-8jc6t\" (UID: \"432351e8-6adb-434f-b110-c141a4123d2c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.789407 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t"] Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.869938 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5jrd\" (UniqueName: \"kubernetes.io/projected/432351e8-6adb-434f-b110-c141a4123d2c-kube-api-access-s5jrd\") pod \"observability-ui-dashboards-66cbf594b5-8jc6t\" (UID: \"432351e8-6adb-434f-b110-c141a4123d2c\") " 
pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.870001 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/432351e8-6adb-434f-b110-c141a4123d2c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-8jc6t\" (UID: \"432351e8-6adb-434f-b110-c141a4123d2c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" Feb 26 08:36:12 crc kubenswrapper[4741]: E0226 08:36:12.874648 4741 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 26 08:36:12 crc kubenswrapper[4741]: E0226 08:36:12.874718 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/432351e8-6adb-434f-b110-c141a4123d2c-serving-cert podName:432351e8-6adb-434f-b110-c141a4123d2c nodeName:}" failed. No retries permitted until 2026-02-26 08:36:13.374694929 +0000 UTC m=+1408.370632326 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/432351e8-6adb-434f-b110-c141a4123d2c-serving-cert") pod "observability-ui-dashboards-66cbf594b5-8jc6t" (UID: "432351e8-6adb-434f-b110-c141a4123d2c") : secret "observability-ui-dashboards" not found Feb 26 08:36:12 crc kubenswrapper[4741]: I0226 08:36:12.964084 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5jrd\" (UniqueName: \"kubernetes.io/projected/432351e8-6adb-434f-b110-c141a4123d2c-kube-api-access-s5jrd\") pod \"observability-ui-dashboards-66cbf594b5-8jc6t\" (UID: \"432351e8-6adb-434f-b110-c141a4123d2c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.113784 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.122638 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.155630 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.155679 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.155849 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.155890 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.155895 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-dhpr9" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.156012 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.158266 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.164360 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.242204 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.291254 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e870a59-585e-4369-88d5-644e5034ad33-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.291312 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e5d6940-44b4-47ce-a01f-6be827908482\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e5d6940-44b4-47ce-a01f-6be827908482\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.291335 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.291371 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.291404 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzrgw\" (UniqueName: \"kubernetes.io/projected/1e870a59-585e-4369-88d5-644e5034ad33-kube-api-access-gzrgw\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.291437 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.291456 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.291481 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e870a59-585e-4369-88d5-644e5034ad33-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.291579 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.291596 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 
08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.305274 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.393036 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-879b4584-zh2v7"] Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.393602 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/432351e8-6adb-434f-b110-c141a4123d2c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-8jc6t\" (UID: \"432351e8-6adb-434f-b110-c141a4123d2c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.393682 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.393711 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.393770 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e870a59-585e-4369-88d5-644e5034ad33-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.393794 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e5d6940-44b4-47ce-a01f-6be827908482\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e5d6940-44b4-47ce-a01f-6be827908482\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.393814 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.393839 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.393874 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzrgw\" (UniqueName: \"kubernetes.io/projected/1e870a59-585e-4369-88d5-644e5034ad33-kube-api-access-gzrgw\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.393903 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 
26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.393923 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.393945 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e870a59-585e-4369-88d5-644e5034ad33-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.400832 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.402051 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.408195 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.412496 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.413686 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/432351e8-6adb-434f-b110-c141a4123d2c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-8jc6t\" (UID: \"432351e8-6adb-434f-b110-c141a4123d2c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.419733 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e870a59-585e-4369-88d5-644e5034ad33-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.420704 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.420806 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e5d6940-44b4-47ce-a01f-6be827908482\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e5d6940-44b4-47ce-a01f-6be827908482\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/97a84e84ad55906c3177876b2db4bcb0d96d5603cd9c39a8a1a4f5e259f7d9f9/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.421039 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.423990 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e870a59-585e-4369-88d5-644e5034ad33-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.425684 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.432216 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.436479 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-879b4584-zh2v7"] Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.458389 4741 scope.go:117] "RemoveContainer" containerID="170dde72cf961223739bba17e82b12705354d780c4e7702e9fb54a910ca80b36" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.472807 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzrgw\" (UniqueName: \"kubernetes.io/projected/1e870a59-585e-4369-88d5-644e5034ad33-kube-api-access-gzrgw\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.529546 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6958fe99-167d-43b2-a0e2-e141e980f982-console-serving-cert\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.529716 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6958fe99-167d-43b2-a0e2-e141e980f982-console-config\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.529809 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc929\" (UniqueName: \"kubernetes.io/projected/6958fe99-167d-43b2-a0e2-e141e980f982-kube-api-access-fc929\") pod 
\"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.529835 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6958fe99-167d-43b2-a0e2-e141e980f982-trusted-ca-bundle\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.530067 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6958fe99-167d-43b2-a0e2-e141e980f982-console-oauth-config\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.530191 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6958fe99-167d-43b2-a0e2-e141e980f982-service-ca\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.530288 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6958fe99-167d-43b2-a0e2-e141e980f982-oauth-serving-cert\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.598792 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"90779734-9d35-47b9-ac0b-dbf02e3453a5","Type":"ContainerStarted","Data":"1544e4551c264026fd5032717ea9256d572bd947af9fbda87ccfe1aeb3af9af5"} Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.635507 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6958fe99-167d-43b2-a0e2-e141e980f982-console-config\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.635583 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc929\" (UniqueName: \"kubernetes.io/projected/6958fe99-167d-43b2-a0e2-e141e980f982-kube-api-access-fc929\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.635617 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6958fe99-167d-43b2-a0e2-e141e980f982-trusted-ca-bundle\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.635673 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6958fe99-167d-43b2-a0e2-e141e980f982-console-oauth-config\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.635723 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6958fe99-167d-43b2-a0e2-e141e980f982-service-ca\") pod 
\"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.635770 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6958fe99-167d-43b2-a0e2-e141e980f982-oauth-serving-cert\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.635866 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6958fe99-167d-43b2-a0e2-e141e980f982-console-serving-cert\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.638020 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6958fe99-167d-43b2-a0e2-e141e980f982-trusted-ca-bundle\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.639829 4741 scope.go:117] "RemoveContainer" containerID="c4a67fb2a22c91ab267a92fe60ca50b8a9d5a0a6c8489477930c38650625c01f" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.640479 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6958fe99-167d-43b2-a0e2-e141e980f982-service-ca\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.640549 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/6958fe99-167d-43b2-a0e2-e141e980f982-console-config\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.641199 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6958fe99-167d-43b2-a0e2-e141e980f982-oauth-serving-cert\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.647633 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6958fe99-167d-43b2-a0e2-e141e980f982-console-serving-cert\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.661970 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc929\" (UniqueName: \"kubernetes.io/projected/6958fe99-167d-43b2-a0e2-e141e980f982-kube-api-access-fc929\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.680861 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6958fe99-167d-43b2-a0e2-e141e980f982-console-oauth-config\") pod \"console-879b4584-zh2v7\" (UID: \"6958fe99-167d-43b2-a0e2-e141e980f982\") " pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.696540 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e5d6940-44b4-47ce-a01f-6be827908482\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e5d6940-44b4-47ce-a01f-6be827908482\") pod \"prometheus-metric-storage-0\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.701277 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.786677 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.855894 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:13 crc kubenswrapper[4741]: I0226 08:36:13.929815 4741 scope.go:117] "RemoveContainer" containerID="ef83b6e4366b50df5b55e74dbfc472c16e6de52373363590ab60a25a0dfe89f0" Feb 26 08:36:14 crc kubenswrapper[4741]: I0226 08:36:14.025358 4741 scope.go:117] "RemoveContainer" containerID="a82dea94c325d1920446c5c6437acab69390ff987bc75afea5416c4539872df7" Feb 26 08:36:14 crc kubenswrapper[4741]: I0226 08:36:14.573471 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t"] Feb 26 08:36:14 crc kubenswrapper[4741]: I0226 08:36:14.827165 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-879b4584-zh2v7"] Feb 26 08:36:14 crc kubenswrapper[4741]: I0226 08:36:14.842782 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 08:36:14 crc kubenswrapper[4741]: W0226 08:36:14.923327 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e870a59_585e_4369_88d5_644e5034ad33.slice/crio-40259b1847451ba52844b69331bf34b077985ea24727b028d283ec7d4a9e4d21 
WatchSource:0}: Error finding container 40259b1847451ba52844b69331bf34b077985ea24727b028d283ec7d4a9e4d21: Status 404 returned error can't find the container with id 40259b1847451ba52844b69331bf34b077985ea24727b028d283ec7d4a9e4d21 Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.780257 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" event={"ID":"432351e8-6adb-434f-b110-c141a4123d2c","Type":"ContainerStarted","Data":"de9b8203e5805714a18b696c18530738f773dbd87265c134052379231057f88d"} Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.857940 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.870446 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e870a59-585e-4369-88d5-644e5034ad33","Type":"ContainerStarted","Data":"40259b1847451ba52844b69331bf34b077985ea24727b028d283ec7d4a9e4d21"} Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.870502 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-879b4584-zh2v7" event={"ID":"6958fe99-167d-43b2-a0e2-e141e980f982","Type":"ContainerStarted","Data":"e7814242ffa9d09dfc66f774cffc33b6fdd6e4977c230c0d6baf8b2ccb263472"} Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.870524 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.871060 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.877093 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.877348 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.877197 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.877607 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.877623 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-gkd2z" Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.952773 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-stlzj"] Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.963215 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-stlzj" Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.968216 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-rnp4l" Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.970606 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 26 08:36:15 crc kubenswrapper[4741]: I0226 08:36:15.972951 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.030314 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-stlzj"] Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.046626 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a548634-b756-4315-b3b2-73e3f3bea6fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a548634-b756-4315-b3b2-73e3f3bea6fa\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.046675 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-var-run\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.046822 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22xbf\" (UniqueName: \"kubernetes.io/projected/85476d1c-5870-4efd-ae6f-ef9a09d9d888-kube-api-access-22xbf\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 
08:36:16.046845 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxccx\" (UniqueName: \"kubernetes.io/projected/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-kube-api-access-zxccx\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.046943 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/85476d1c-5870-4efd-ae6f-ef9a09d9d888-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.046996 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-var-log-ovn\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.047018 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-scripts\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.049266 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/85476d1c-5870-4efd-ae6f-ef9a09d9d888-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.049290 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85476d1c-5870-4efd-ae6f-ef9a09d9d888-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.049340 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-ovn-controller-tls-certs\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.049390 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-combined-ca-bundle\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.049418 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/85476d1c-5870-4efd-ae6f-ef9a09d9d888-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.049441 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85476d1c-5870-4efd-ae6f-ef9a09d9d888-config\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.049821 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85476d1c-5870-4efd-ae6f-ef9a09d9d888-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.049891 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-var-run-ovn\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.108056 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-h2ft9"] Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.115499 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.132149 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-h2ft9"] Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.151588 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/85476d1c-5870-4efd-ae6f-ef9a09d9d888-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.151657 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-var-log-ovn\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.151697 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-scripts\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.151721 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/85476d1c-5870-4efd-ae6f-ef9a09d9d888-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.151745 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85476d1c-5870-4efd-ae6f-ef9a09d9d888-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.151775 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-ovn-controller-tls-certs\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.151802 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-combined-ca-bundle\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.151822 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/85476d1c-5870-4efd-ae6f-ef9a09d9d888-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.151840 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85476d1c-5870-4efd-ae6f-ef9a09d9d888-config\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.152771 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85476d1c-5870-4efd-ae6f-ef9a09d9d888-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.152797 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-var-run-ovn\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.152838 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a548634-b756-4315-b3b2-73e3f3bea6fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a548634-b756-4315-b3b2-73e3f3bea6fa\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.152858 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-var-run\") pod \"ovn-controller-stlzj\" (UID: 
\"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.152949 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22xbf\" (UniqueName: \"kubernetes.io/projected/85476d1c-5870-4efd-ae6f-ef9a09d9d888-kube-api-access-22xbf\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.152970 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxccx\" (UniqueName: \"kubernetes.io/projected/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-kube-api-access-zxccx\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.159520 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/85476d1c-5870-4efd-ae6f-ef9a09d9d888-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.160402 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-var-log-ovn\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.163042 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-scripts\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.163322 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/85476d1c-5870-4efd-ae6f-ef9a09d9d888-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.171586 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-var-run-ovn\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.172348 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85476d1c-5870-4efd-ae6f-ef9a09d9d888-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.172484 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85476d1c-5870-4efd-ae6f-ef9a09d9d888-config\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.172622 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-var-run\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.175774 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-ovn-controller-tls-certs\") pod \"ovn-controller-stlzj\" (UID: 
\"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.179169 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-combined-ca-bundle\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.182139 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxccx\" (UniqueName: \"kubernetes.io/projected/6a8ae1f8-db05-4bc6-a470-60c58ec57f8c-kube-api-access-zxccx\") pod \"ovn-controller-stlzj\" (UID: \"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c\") " pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.183370 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85476d1c-5870-4efd-ae6f-ef9a09d9d888-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.184362 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/85476d1c-5870-4efd-ae6f-ef9a09d9d888-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.184649 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.184676 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a548634-b756-4315-b3b2-73e3f3bea6fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a548634-b756-4315-b3b2-73e3f3bea6fa\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b0c02258cccd1c4ef6546cd21bd82b785f863f9ffa7e670b4a17ef33d6fcc8c9/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.213420 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22xbf\" (UniqueName: \"kubernetes.io/projected/85476d1c-5870-4efd-ae6f-ef9a09d9d888-kube-api-access-22xbf\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.258934 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f9888af-7f4e-4ed5-afb4-b13215010297-var-run\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.259434 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f9888af-7f4e-4ed5-afb4-b13215010297-scripts\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.259454 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4f9888af-7f4e-4ed5-afb4-b13215010297-var-log\") pod 
\"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.259501 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4f9888af-7f4e-4ed5-afb4-b13215010297-var-lib\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.259543 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7td7q\" (UniqueName: \"kubernetes.io/projected/4f9888af-7f4e-4ed5-afb4-b13215010297-kube-api-access-7td7q\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.259586 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4f9888af-7f4e-4ed5-afb4-b13215010297-etc-ovs\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.305832 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a548634-b756-4315-b3b2-73e3f3bea6fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a548634-b756-4315-b3b2-73e3f3bea6fa\") pod \"ovsdbserver-nb-0\" (UID: \"85476d1c-5870-4efd-ae6f-ef9a09d9d888\") " pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.361387 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-stlzj" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.363711 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f9888af-7f4e-4ed5-afb4-b13215010297-var-run\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.363797 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f9888af-7f4e-4ed5-afb4-b13215010297-scripts\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.363821 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4f9888af-7f4e-4ed5-afb4-b13215010297-var-log\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.363971 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f9888af-7f4e-4ed5-afb4-b13215010297-var-run\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.364313 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4f9888af-7f4e-4ed5-afb4-b13215010297-var-log\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.370293 4741 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f9888af-7f4e-4ed5-afb4-b13215010297-scripts\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.370517 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4f9888af-7f4e-4ed5-afb4-b13215010297-var-lib\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.370659 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7td7q\" (UniqueName: \"kubernetes.io/projected/4f9888af-7f4e-4ed5-afb4-b13215010297-kube-api-access-7td7q\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.370837 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4f9888af-7f4e-4ed5-afb4-b13215010297-etc-ovs\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.371310 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4f9888af-7f4e-4ed5-afb4-b13215010297-var-lib\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.371433 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4f9888af-7f4e-4ed5-afb4-b13215010297-etc-ovs\") pod \"ovn-controller-ovs-h2ft9\" 
(UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.394621 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7td7q\" (UniqueName: \"kubernetes.io/projected/4f9888af-7f4e-4ed5-afb4-b13215010297-kube-api-access-7td7q\") pod \"ovn-controller-ovs-h2ft9\" (UID: \"4f9888af-7f4e-4ed5-afb4-b13215010297\") " pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.441986 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.496748 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.863211 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-879b4584-zh2v7" event={"ID":"6958fe99-167d-43b2-a0e2-e141e980f982","Type":"ContainerStarted","Data":"e54a3d3b0d9b9b2dc5f713747a1dab2522f01f86d1c04050998494d2dd3ad6f5"} Feb 26 08:36:16 crc kubenswrapper[4741]: I0226 08:36:16.899290 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-879b4584-zh2v7" podStartSLOduration=3.899247756 podStartE2EDuration="3.899247756s" podCreationTimestamp="2026-02-26 08:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:36:16.884624782 +0000 UTC m=+1411.880562169" watchObservedRunningTime="2026-02-26 08:36:16.899247756 +0000 UTC m=+1411.895185143" Feb 26 08:36:17 crc kubenswrapper[4741]: I0226 08:36:17.193225 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-stlzj"] Feb 26 08:36:17 crc kubenswrapper[4741]: W0226 08:36:17.213824 4741 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a8ae1f8_db05_4bc6_a470_60c58ec57f8c.slice/crio-c6f886f5e5eb8a6d4319e36aa42aca00b0586ef29534a3bf09e51435ba96f240 WatchSource:0}: Error finding container c6f886f5e5eb8a6d4319e36aa42aca00b0586ef29534a3bf09e51435ba96f240: Status 404 returned error can't find the container with id c6f886f5e5eb8a6d4319e36aa42aca00b0586ef29534a3bf09e51435ba96f240 Feb 26 08:36:17 crc kubenswrapper[4741]: I0226 08:36:17.944577 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-stlzj" event={"ID":"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c","Type":"ContainerStarted","Data":"c6f886f5e5eb8a6d4319e36aa42aca00b0586ef29534a3bf09e51435ba96f240"} Feb 26 08:36:18 crc kubenswrapper[4741]: I0226 08:36:18.529675 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 08:36:18 crc kubenswrapper[4741]: I0226 08:36:18.961054 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"85476d1c-5870-4efd-ae6f-ef9a09d9d888","Type":"ContainerStarted","Data":"08f9b87309dd366e6f7fd6f42280cb328deea4eeb0616c861fb04e4a6f7c99f1"} Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.056635 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.059465 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.066082 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.066406 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.067738 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6thkx" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.068152 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.099588 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.170625 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d649a1f-19db-4b0d-8162-aec7e405ccb4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.170679 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bbd25898-e8a6-4cb6-9fcd-c2c18bbe321a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbd25898-e8a6-4cb6-9fcd-c2c18bbe321a\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.170766 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d649a1f-19db-4b0d-8162-aec7e405ccb4-metrics-certs-tls-certs\") 
pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.170804 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdb7z\" (UniqueName: \"kubernetes.io/projected/3d649a1f-19db-4b0d-8162-aec7e405ccb4-kube-api-access-rdb7z\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.170871 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d649a1f-19db-4b0d-8162-aec7e405ccb4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.170987 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d649a1f-19db-4b0d-8162-aec7e405ccb4-config\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.171020 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d649a1f-19db-4b0d-8162-aec7e405ccb4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.171221 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d649a1f-19db-4b0d-8162-aec7e405ccb4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.233516 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-h2ft9"] Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.273464 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d649a1f-19db-4b0d-8162-aec7e405ccb4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.273547 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d649a1f-19db-4b0d-8162-aec7e405ccb4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.273569 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bbd25898-e8a6-4cb6-9fcd-c2c18bbe321a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbd25898-e8a6-4cb6-9fcd-c2c18bbe321a\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.273601 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d649a1f-19db-4b0d-8162-aec7e405ccb4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.273625 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdb7z\" (UniqueName: 
\"kubernetes.io/projected/3d649a1f-19db-4b0d-8162-aec7e405ccb4-kube-api-access-rdb7z\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.273680 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d649a1f-19db-4b0d-8162-aec7e405ccb4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.273711 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d649a1f-19db-4b0d-8162-aec7e405ccb4-config\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.273746 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d649a1f-19db-4b0d-8162-aec7e405ccb4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.274863 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d649a1f-19db-4b0d-8162-aec7e405ccb4-config\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.275358 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d649a1f-19db-4b0d-8162-aec7e405ccb4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 
crc kubenswrapper[4741]: I0226 08:36:19.275696 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d649a1f-19db-4b0d-8162-aec7e405ccb4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.278535 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.278584 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bbd25898-e8a6-4cb6-9fcd-c2c18bbe321a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbd25898-e8a6-4cb6-9fcd-c2c18bbe321a\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a0e9759790bc7188e7b1ac8cb73bc8f16eded945d42053bd57eb02e1adafc765/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.282844 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d649a1f-19db-4b0d-8162-aec7e405ccb4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.296829 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d649a1f-19db-4b0d-8162-aec7e405ccb4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.296975 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdb7z\" 
(UniqueName: \"kubernetes.io/projected/3d649a1f-19db-4b0d-8162-aec7e405ccb4-kube-api-access-rdb7z\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.319394 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d649a1f-19db-4b0d-8162-aec7e405ccb4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.351536 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bbd25898-e8a6-4cb6-9fcd-c2c18bbe321a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbd25898-e8a6-4cb6-9fcd-c2c18bbe321a\") pod \"ovsdbserver-sb-0\" (UID: \"3d649a1f-19db-4b0d-8162-aec7e405ccb4\") " pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:19 crc kubenswrapper[4741]: I0226 08:36:19.398842 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 26 08:36:23 crc kubenswrapper[4741]: I0226 08:36:23.857231 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:23 crc kubenswrapper[4741]: I0226 08:36:23.858123 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:23 crc kubenswrapper[4741]: I0226 08:36:23.863534 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:24 crc kubenswrapper[4741]: I0226 08:36:24.037097 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-879b4584-zh2v7" Feb 26 08:36:24 crc kubenswrapper[4741]: I0226 08:36:24.103826 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7bcfd56675-fs5kg"] Feb 26 08:36:25 crc kubenswrapper[4741]: I0226 08:36:25.148689 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:36:25 crc kubenswrapper[4741]: I0226 08:36:25.148909 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:36:25 crc kubenswrapper[4741]: I0226 08:36:25.148959 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:36:25 crc kubenswrapper[4741]: I0226 08:36:25.149896 4741 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"288a5333cab594a75e3a28112d2f250579a2bdc002b7db4ded270dcedecce3e8"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 08:36:25 crc kubenswrapper[4741]: I0226 08:36:25.149974 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://288a5333cab594a75e3a28112d2f250579a2bdc002b7db4ded270dcedecce3e8" gracePeriod=600 Feb 26 08:36:28 crc kubenswrapper[4741]: W0226 08:36:28.647441 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9888af_7f4e_4ed5_afb4_b13215010297.slice/crio-70da2006931d20458fd11425adcb61ef63a9629a06d86f1d391f87cdabbd5bef WatchSource:0}: Error finding container 70da2006931d20458fd11425adcb61ef63a9629a06d86f1d391f87cdabbd5bef: Status 404 returned error can't find the container with id 70da2006931d20458fd11425adcb61ef63a9629a06d86f1d391f87cdabbd5bef Feb 26 08:36:29 crc kubenswrapper[4741]: I0226 08:36:29.085401 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="288a5333cab594a75e3a28112d2f250579a2bdc002b7db4ded270dcedecce3e8" exitCode=0 Feb 26 08:36:29 crc kubenswrapper[4741]: I0226 08:36:29.085796 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"288a5333cab594a75e3a28112d2f250579a2bdc002b7db4ded270dcedecce3e8"} Feb 26 08:36:29 crc kubenswrapper[4741]: I0226 08:36:29.086033 4741 scope.go:117] "RemoveContainer" 
containerID="2002cb3f72e48e911f95f750897f0b9b646f0cc9cd35a0939515422d73baaa0a" Feb 26 08:36:29 crc kubenswrapper[4741]: I0226 08:36:29.089478 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h2ft9" event={"ID":"4f9888af-7f4e-4ed5-afb4-b13215010297","Type":"ContainerStarted","Data":"70da2006931d20458fd11425adcb61ef63a9629a06d86f1d391f87cdabbd5bef"} Feb 26 08:36:36 crc kubenswrapper[4741]: E0226 08:36:36.477596 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f" Feb 26 08:36:36 crc kubenswrapper[4741]: E0226 08:36:36.478397 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:observability-ui-dashboards,Image:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,Command:[],Args:[-port=9443 -cert=/var/serving-cert/tls.crt 
-key=/var/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s5jrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-ui-dashboards-66cbf594b5-8jc6t_openshift-operators(432351e8-6adb-434f-b110-c141a4123d2c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 08:36:36 crc kubenswrapper[4741]: E0226 08:36:36.479605 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" podUID="432351e8-6adb-434f-b110-c141a4123d2c" Feb 26 08:36:37 crc 
kubenswrapper[4741]: E0226 08:36:37.175722 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f\\\"\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" podUID="432351e8-6adb-434f-b110-c141a4123d2c" Feb 26 08:36:49 crc kubenswrapper[4741]: I0226 08:36:49.166076 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7bcfd56675-fs5kg" podUID="7920dbc4-e9a6-40ab-b766-2546575f2014" containerName="console" containerID="cri-o://15b7c5f850cd6c4cf26f64e2f1fca1664b43778bb02a1b8e40306ce3a5b21280" gracePeriod=15 Feb 26 08:36:50 crc kubenswrapper[4741]: I0226 08:36:50.309074 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bcfd56675-fs5kg_7920dbc4-e9a6-40ab-b766-2546575f2014/console/0.log" Feb 26 08:36:50 crc kubenswrapper[4741]: I0226 08:36:50.309565 4741 generic.go:334] "Generic (PLEG): container finished" podID="7920dbc4-e9a6-40ab-b766-2546575f2014" containerID="15b7c5f850cd6c4cf26f64e2f1fca1664b43778bb02a1b8e40306ce3a5b21280" exitCode=2 Feb 26 08:36:50 crc kubenswrapper[4741]: I0226 08:36:50.309611 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bcfd56675-fs5kg" event={"ID":"7920dbc4-e9a6-40ab-b766-2546575f2014","Type":"ContainerDied","Data":"15b7c5f850cd6c4cf26f64e2f1fca1664b43778bb02a1b8e40306ce3a5b21280"} Feb 26 08:36:50 crc kubenswrapper[4741]: I0226 08:36:50.365332 4741 patch_prober.go:28] interesting pod/console-7bcfd56675-fs5kg container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.95:8443/health\": dial tcp 10.217.0.95:8443: connect: connection refused" start-of-body= Feb 26 08:36:50 crc 
kubenswrapper[4741]: I0226 08:36:50.365414 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7bcfd56675-fs5kg" podUID="7920dbc4-e9a6-40ab-b766-2546575f2014" containerName="console" probeResult="failure" output="Get \"https://10.217.0.95:8443/health\": dial tcp 10.217.0.95:8443: connect: connection refused" Feb 26 08:36:52 crc kubenswrapper[4741]: E0226 08:36:52.772968 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Feb 26 08:36:52 crc kubenswrapper[4741]: E0226 08:36:52.774479 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzrgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
prometheus-metric-storage-0_openstack(1e870a59-585e-4369-88d5-644e5034ad33): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 08:36:52 crc kubenswrapper[4741]: E0226 08:36:52.776183 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="1e870a59-585e-4369-88d5-644e5034ad33" Feb 26 08:36:53 crc kubenswrapper[4741]: E0226 08:36:53.350086 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="1e870a59-585e-4369-88d5-644e5034ad33" Feb 26 08:36:59 crc kubenswrapper[4741]: I0226 08:36:59.852731 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-bkphx"] Feb 26 08:36:59 crc kubenswrapper[4741]: I0226 08:36:59.855394 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:36:59 crc kubenswrapper[4741]: I0226 08:36:59.859062 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 26 08:36:59 crc kubenswrapper[4741]: I0226 08:36:59.883660 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bkphx"] Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.011427 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c14e41-587f-4290-9133-6a3f89e43d86-combined-ca-bundle\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.012015 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/64c14e41-587f-4290-9133-6a3f89e43d86-ovn-rundir\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.012102 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/64c14e41-587f-4290-9133-6a3f89e43d86-ovs-rundir\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.012276 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c14e41-587f-4290-9133-6a3f89e43d86-config\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " 
pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.012412 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq7bq\" (UniqueName: \"kubernetes.io/projected/64c14e41-587f-4290-9133-6a3f89e43d86-kube-api-access-sq7bq\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.012611 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c14e41-587f-4290-9133-6a3f89e43d86-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.100680 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krvtm"] Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.115072 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c14e41-587f-4290-9133-6a3f89e43d86-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.115209 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c14e41-587f-4290-9133-6a3f89e43d86-combined-ca-bundle\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.115303 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/64c14e41-587f-4290-9133-6a3f89e43d86-ovn-rundir\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.115336 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/64c14e41-587f-4290-9133-6a3f89e43d86-ovs-rundir\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.115392 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c14e41-587f-4290-9133-6a3f89e43d86-config\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.115428 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq7bq\" (UniqueName: \"kubernetes.io/projected/64c14e41-587f-4290-9133-6a3f89e43d86-kube-api-access-sq7bq\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.117272 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/64c14e41-587f-4290-9133-6a3f89e43d86-ovn-rundir\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.117446 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/64c14e41-587f-4290-9133-6a3f89e43d86-ovs-rundir\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.117955 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c14e41-587f-4290-9133-6a3f89e43d86-config\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.122209 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c14e41-587f-4290-9133-6a3f89e43d86-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.124906 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c14e41-587f-4290-9133-6a3f89e43d86-combined-ca-bundle\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.143081 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq7bq\" (UniqueName: \"kubernetes.io/projected/64c14e41-587f-4290-9133-6a3f89e43d86-kube-api-access-sq7bq\") pod \"ovn-controller-metrics-bkphx\" (UID: \"64c14e41-587f-4290-9133-6a3f89e43d86\") " pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.154390 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-sz6p9"] Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.156471 4741 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.160192 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.185059 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-bkphx" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.186404 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-sz6p9"] Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.218547 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lt4b\" (UniqueName: \"kubernetes.io/projected/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-kube-api-access-4lt4b\") pod \"dnsmasq-dns-7fd796d7df-sz6p9\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.218855 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-config\") pod \"dnsmasq-dns-7fd796d7df-sz6p9\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.219305 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-sz6p9\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.219408 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-sz6p9\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.322018 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lt4b\" (UniqueName: \"kubernetes.io/projected/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-kube-api-access-4lt4b\") pod \"dnsmasq-dns-7fd796d7df-sz6p9\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.322220 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-config\") pod \"dnsmasq-dns-7fd796d7df-sz6p9\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.322284 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-sz6p9\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.322322 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-sz6p9\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.323322 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-sz6p9\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.323359 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-sz6p9\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.323662 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-config\") pod \"dnsmasq-dns-7fd796d7df-sz6p9\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.352666 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lt4b\" (UniqueName: \"kubernetes.io/projected/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-kube-api-access-4lt4b\") pod \"dnsmasq-dns-7fd796d7df-sz6p9\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:00 crc kubenswrapper[4741]: I0226 08:37:00.517292 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:01 crc kubenswrapper[4741]: I0226 08:37:01.365248 4741 patch_prober.go:28] interesting pod/console-7bcfd56675-fs5kg container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.95:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 08:37:01 crc kubenswrapper[4741]: I0226 08:37:01.365751 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7bcfd56675-fs5kg" podUID="7920dbc4-e9a6-40ab-b766-2546575f2014" containerName="console" probeResult="failure" output="Get \"https://10.217.0.95:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 08:37:11 crc kubenswrapper[4741]: I0226 08:37:11.364490 4741 patch_prober.go:28] interesting pod/console-7bcfd56675-fs5kg container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.95:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 08:37:11 crc kubenswrapper[4741]: I0226 08:37:11.365483 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7bcfd56675-fs5kg" podUID="7920dbc4-e9a6-40ab-b766-2546575f2014" containerName="console" probeResult="failure" output="Get \"https://10.217.0.95:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 08:37:11 crc kubenswrapper[4741]: I0226 08:37:11.365591 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:37:13 crc kubenswrapper[4741]: E0226 08:37:13.114234 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob 
sha256:b6aa816fadd272044ee44c055c677121fc5a92123601073944b4969d57a952d4: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-nb-db-server/blobs/sha256:b6aa816fadd272044ee44c055c677121fc5a92123601073944b4969d57a952d4\": context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Feb 26 08:37:13 crc kubenswrapper[4741]: E0226 08:37:13.115590 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7h67fh5f9h695h55dh65h5d5h95hdbh5b7h558h9fh666h56fh556h78h87h654h54h5f6h598h698h5c6h54ch547h576h54ch87h58h676h675h569q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-22xbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-nb-0_openstack(85476d1c-5870-4efd-ae6f-ef9a09d9d888): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:b6aa816fadd272044ee44c055c677121fc5a92123601073944b4969d57a952d4: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-nb-db-server/blobs/sha256:b6aa816fadd272044ee44c055c677121fc5a92123601073944b4969d57a952d4\": context canceled" logger="UnhandledError" Feb 26 08:37:13 crc kubenswrapper[4741]: E0226 08:37:13.264299 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 26 08:37:13 crc kubenswrapper[4741]: E0226 08:37:13.264538 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cv7ks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(26969fe6-2bb9-4f23-8c49-d9d359763da3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:13 crc 
kubenswrapper[4741]: E0226 08:37:13.265887 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="26969fe6-2bb9-4f23-8c49-d9d359763da3" Feb 26 08:37:13 crc kubenswrapper[4741]: E0226 08:37:13.560733 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="26969fe6-2bb9-4f23-8c49-d9d359763da3" Feb 26 08:37:15 crc kubenswrapper[4741]: E0226 08:37:15.863382 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 26 08:37:15 crc kubenswrapper[4741]: E0226 08:37:15.863980 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6j54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(2b1496b8-9f14-472d-af02-7357f75ba7cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:15 crc kubenswrapper[4741]: E0226 08:37:15.865180 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="2b1496b8-9f14-472d-af02-7357f75ba7cf" Feb 26 08:37:15 crc kubenswrapper[4741]: E0226 08:37:15.866197 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 26 08:37:15 crc kubenswrapper[4741]: E0226 08:37:15.866375 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rjcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(815578f6-90b1-4afc-91c7-d24a59a11b23): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:15 crc 
kubenswrapper[4741]: E0226 08:37:15.867560 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="815578f6-90b1-4afc-91c7-d24a59a11b23" Feb 26 08:37:16 crc kubenswrapper[4741]: E0226 08:37:16.601279 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="815578f6-90b1-4afc-91c7-d24a59a11b23" Feb 26 08:37:16 crc kubenswrapper[4741]: E0226 08:37:16.601543 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="2b1496b8-9f14-472d-af02-7357f75ba7cf" Feb 26 08:37:16 crc kubenswrapper[4741]: E0226 08:37:16.659203 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 26 08:37:16 crc kubenswrapper[4741]: E0226 08:37:16.659525 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n657h64ch9ch7ch54bh599hbfhd9hf7h58h54dh67fh597h68dh5c4h59ch568h678h5d6h55ch566h647h94h57chf9h5c9hc5hdfh676h5f7h5d9h5b4q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rl98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(d7ee427f-ada6-4496-a314-c5cd63abefcd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:16 crc kubenswrapper[4741]: E0226 08:37:16.660763 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="d7ee427f-ada6-4496-a314-c5cd63abefcd" Feb 26 08:37:17 crc kubenswrapper[4741]: E0226 08:37:17.614854 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="d7ee427f-ada6-4496-a314-c5cd63abefcd" Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.365040 4741 patch_prober.go:28] interesting 
pod/console-7bcfd56675-fs5kg container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.95:8443/health\": dial tcp 10.217.0.95:8443: i/o timeout" start-of-body= Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.365586 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7bcfd56675-fs5kg" podUID="7920dbc4-e9a6-40ab-b766-2546575f2014" containerName="console" probeResult="failure" output="Get \"https://10.217.0.95:8443/health\": dial tcp 10.217.0.95:8443: i/o timeout" Feb 26 08:37:21 crc kubenswrapper[4741]: E0226 08:37:21.780836 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 26 08:37:21 crc kubenswrapper[4741]: E0226 08:37:21.781535 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjrbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(d20c309e-9b10-446d-a7f7-8aad2bdecfc9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:21 crc 
kubenswrapper[4741]: E0226 08:37:21.782989 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.811620 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bcfd56675-fs5kg_7920dbc4-e9a6-40ab-b766-2546575f2014/console/0.log" Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.811723 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:37:21 crc kubenswrapper[4741]: E0226 08:37:21.891987 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 26 08:37:21 crc kubenswrapper[4741]: E0226 08:37:21.892195 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25b2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(ed8ae863-261b-4cbd-945a-b79c99fa0a9f): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:21 crc kubenswrapper[4741]: E0226 08:37:21.893355 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="ed8ae863-261b-4cbd-945a-b79c99fa0a9f" Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.952153 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7920dbc4-e9a6-40ab-b766-2546575f2014-console-oauth-config\") pod \"7920dbc4-e9a6-40ab-b766-2546575f2014\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.952248 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-oauth-serving-cert\") pod \"7920dbc4-e9a6-40ab-b766-2546575f2014\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.952315 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-service-ca\") pod \"7920dbc4-e9a6-40ab-b766-2546575f2014\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.952368 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-console-config\") pod \"7920dbc4-e9a6-40ab-b766-2546575f2014\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.952442 4741 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-trusted-ca-bundle\") pod \"7920dbc4-e9a6-40ab-b766-2546575f2014\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.952542 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7vc5\" (UniqueName: \"kubernetes.io/projected/7920dbc4-e9a6-40ab-b766-2546575f2014-kube-api-access-w7vc5\") pod \"7920dbc4-e9a6-40ab-b766-2546575f2014\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.952594 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7920dbc4-e9a6-40ab-b766-2546575f2014-console-serving-cert\") pod \"7920dbc4-e9a6-40ab-b766-2546575f2014\" (UID: \"7920dbc4-e9a6-40ab-b766-2546575f2014\") " Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.953327 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-service-ca" (OuterVolumeSpecName: "service-ca") pod "7920dbc4-e9a6-40ab-b766-2546575f2014" (UID: "7920dbc4-e9a6-40ab-b766-2546575f2014"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.954724 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7920dbc4-e9a6-40ab-b766-2546575f2014" (UID: "7920dbc4-e9a6-40ab-b766-2546575f2014"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.955369 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7920dbc4-e9a6-40ab-b766-2546575f2014" (UID: "7920dbc4-e9a6-40ab-b766-2546575f2014"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.955630 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-console-config" (OuterVolumeSpecName: "console-config") pod "7920dbc4-e9a6-40ab-b766-2546575f2014" (UID: "7920dbc4-e9a6-40ab-b766-2546575f2014"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.959043 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7920dbc4-e9a6-40ab-b766-2546575f2014-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7920dbc4-e9a6-40ab-b766-2546575f2014" (UID: "7920dbc4-e9a6-40ab-b766-2546575f2014"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.959874 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7920dbc4-e9a6-40ab-b766-2546575f2014-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7920dbc4-e9a6-40ab-b766-2546575f2014" (UID: "7920dbc4-e9a6-40ab-b766-2546575f2014"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:37:21 crc kubenswrapper[4741]: I0226 08:37:21.960094 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7920dbc4-e9a6-40ab-b766-2546575f2014-kube-api-access-w7vc5" (OuterVolumeSpecName: "kube-api-access-w7vc5") pod "7920dbc4-e9a6-40ab-b766-2546575f2014" (UID: "7920dbc4-e9a6-40ab-b766-2546575f2014"). InnerVolumeSpecName "kube-api-access-w7vc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:37:22 crc kubenswrapper[4741]: I0226 08:37:22.056091 4741 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7920dbc4-e9a6-40ab-b766-2546575f2014-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:22 crc kubenswrapper[4741]: I0226 08:37:22.056165 4741 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7920dbc4-e9a6-40ab-b766-2546575f2014-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:22 crc kubenswrapper[4741]: I0226 08:37:22.056177 4741 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:22 crc kubenswrapper[4741]: I0226 08:37:22.056188 4741 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:22 crc kubenswrapper[4741]: I0226 08:37:22.056197 4741 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:22 crc kubenswrapper[4741]: I0226 08:37:22.056206 4741 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7920dbc4-e9a6-40ab-b766-2546575f2014-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:22 crc kubenswrapper[4741]: I0226 08:37:22.056215 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7vc5\" (UniqueName: \"kubernetes.io/projected/7920dbc4-e9a6-40ab-b766-2546575f2014-kube-api-access-w7vc5\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:22 crc kubenswrapper[4741]: E0226 08:37:22.423567 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 26 08:37:22 crc kubenswrapper[4741]: E0226 08:37:22.423834 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6csx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(403c217b-d3d9-47a3-8a5a-4f6e917edcad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:22 crc 
kubenswrapper[4741]: E0226 08:37:22.425034 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" Feb 26 08:37:22 crc kubenswrapper[4741]: I0226 08:37:22.672146 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bcfd56675-fs5kg_7920dbc4-e9a6-40ab-b766-2546575f2014/console/0.log" Feb 26 08:37:22 crc kubenswrapper[4741]: I0226 08:37:22.672282 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bcfd56675-fs5kg" event={"ID":"7920dbc4-e9a6-40ab-b766-2546575f2014","Type":"ContainerDied","Data":"e0e52e64b6f14257197ce3029ca53805f49bbadd709669b8122aef4ac1e90e6e"} Feb 26 08:37:22 crc kubenswrapper[4741]: I0226 08:37:22.672332 4741 scope.go:117] "RemoveContainer" containerID="15b7c5f850cd6c4cf26f64e2f1fca1664b43778bb02a1b8e40306ce3a5b21280" Feb 26 08:37:22 crc kubenswrapper[4741]: I0226 08:37:22.672292 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7bcfd56675-fs5kg" Feb 26 08:37:22 crc kubenswrapper[4741]: E0226 08:37:22.688820 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="ed8ae863-261b-4cbd-945a-b79c99fa0a9f" Feb 26 08:37:22 crc kubenswrapper[4741]: E0226 08:37:22.690129 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" Feb 26 08:37:22 crc kubenswrapper[4741]: E0226 08:37:22.690133 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" Feb 26 08:37:22 crc kubenswrapper[4741]: I0226 08:37:22.826475 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7bcfd56675-fs5kg"] Feb 26 08:37:22 crc kubenswrapper[4741]: I0226 08:37:22.841629 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7bcfd56675-fs5kg"] Feb 26 08:37:23 crc kubenswrapper[4741]: I0226 08:37:23.807277 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7920dbc4-e9a6-40ab-b766-2546575f2014" path="/var/lib/kubelet/pods/7920dbc4-e9a6-40ab-b766-2546575f2014/volumes" Feb 26 08:37:27 crc kubenswrapper[4741]: E0226 08:37:27.531269 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Feb 26 08:37:27 crc kubenswrapper[4741]: E0226 08:37:27.532182 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h85h68ch5d4h555h69h5c5h59bhcdh567h676h65bh4h578h649hfh65fh679hbch699h7fh686h56bh7dh656h5c8h5b5h68fh6hf8hd8h668q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7td7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-h2ft9_openstack(4f9888af-7f4e-4ed5-afb4-b13215010297): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:27 crc kubenswrapper[4741]: E0226 08:37:27.533663 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-h2ft9" podUID="4f9888af-7f4e-4ed5-afb4-b13215010297" Feb 26 08:37:27 crc kubenswrapper[4741]: E0226 08:37:27.739476 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-h2ft9" podUID="4f9888af-7f4e-4ed5-afb4-b13215010297" Feb 26 08:37:28 crc kubenswrapper[4741]: E0226 08:37:28.381489 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Feb 26 08:37:28 crc kubenswrapper[4741]: E0226 08:37:28.382017 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock 
--certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h85h68ch5d4h555h69h5c5h59bhcdh567h676h65bh4h578h649hfh65fh679hbch699h7fh686h56bh7dh656h5c8h5b5h68fh6hf8hd8h668q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/loca
l/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-stlzj_openstack(6a8ae1f8-db05-4bc6-a470-60c58ec57f8c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:28 crc kubenswrapper[4741]: E0226 08:37:28.383287 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-stlzj" podUID="6a8ae1f8-db05-4bc6-a470-60c58ec57f8c" Feb 26 08:37:28 crc kubenswrapper[4741]: E0226 08:37:28.751164 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-stlzj" podUID="6a8ae1f8-db05-4bc6-a470-60c58ec57f8c" Feb 26 08:37:29 crc kubenswrapper[4741]: I0226 08:37:29.478067 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 08:37:31 crc kubenswrapper[4741]: E0226 08:37:31.095522 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 08:37:31 crc kubenswrapper[4741]: E0226 08:37:31.096233 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qd9s4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-krvtm_openstack(ce3a54c1-256c-4209-bbe4-07b9d62be849): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:31 crc kubenswrapper[4741]: E0226 08:37:31.097479 4741 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-krvtm" podUID="ce3a54c1-256c-4209-bbe4-07b9d62be849" Feb 26 08:37:31 crc kubenswrapper[4741]: I0226 08:37:31.550556 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 08:37:32 crc kubenswrapper[4741]: E0226 08:37:32.318253 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 08:37:32 crc kubenswrapper[4741]: E0226 08:37:32.319857 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbhph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-g29mj_openstack(c8319c09-a1ef-4694-a3f2-ae3d49ab3e53): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:32 crc kubenswrapper[4741]: E0226 08:37:32.321143 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-g29mj" podUID="c8319c09-a1ef-4694-a3f2-ae3d49ab3e53" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.484487 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7mjbs"] Feb 26 08:37:32 crc kubenswrapper[4741]: E0226 08:37:32.485436 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7920dbc4-e9a6-40ab-b766-2546575f2014" containerName="console" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.485463 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="7920dbc4-e9a6-40ab-b766-2546575f2014" containerName="console" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.485729 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="7920dbc4-e9a6-40ab-b766-2546575f2014" containerName="console" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.488443 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.498297 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7mjbs"] Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.575982 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kz4j\" (UniqueName: \"kubernetes.io/projected/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-kube-api-access-2kz4j\") pod \"redhat-operators-7mjbs\" (UID: \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\") " pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.576553 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-utilities\") pod \"redhat-operators-7mjbs\" (UID: \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\") " 
pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.576652 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-catalog-content\") pod \"redhat-operators-7mjbs\" (UID: \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\") " pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.679127 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-utilities\") pod \"redhat-operators-7mjbs\" (UID: \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\") " pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.679210 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-catalog-content\") pod \"redhat-operators-7mjbs\" (UID: \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\") " pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.679302 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kz4j\" (UniqueName: \"kubernetes.io/projected/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-kube-api-access-2kz4j\") pod \"redhat-operators-7mjbs\" (UID: \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\") " pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.679799 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-utilities\") pod \"redhat-operators-7mjbs\" (UID: \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\") " 
pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.679784 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-catalog-content\") pod \"redhat-operators-7mjbs\" (UID: \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\") " pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:37:32 crc kubenswrapper[4741]: E0226 08:37:32.680378 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 08:37:32 crc kubenswrapper[4741]: E0226 08:37:32.680590 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvp9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-v5j5l_openstack(9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:32 crc kubenswrapper[4741]: E0226 08:37:32.682508 4741 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" podUID="9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.717601 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kz4j\" (UniqueName: \"kubernetes.io/projected/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-kube-api-access-2kz4j\") pod \"redhat-operators-7mjbs\" (UID: \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\") " pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:37:32 crc kubenswrapper[4741]: I0226 08:37:32.853743 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:37:32 crc kubenswrapper[4741]: E0226 08:37:32.982063 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 08:37:32 crc kubenswrapper[4741]: E0226 08:37:32.982323 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6gqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-8xgsf_openstack(97d8e026-9359-4dfe-b9b0-01857c576cc5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:32 crc kubenswrapper[4741]: E0226 08:37:32.984250 4741 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" podUID="97d8e026-9359-4dfe-b9b0-01857c576cc5" Feb 26 08:37:33 crc kubenswrapper[4741]: I0226 08:37:33.696348 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bkphx"] Feb 26 08:37:33 crc kubenswrapper[4741]: I0226 08:37:33.801315 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-sz6p9"] Feb 26 08:37:33 crc kubenswrapper[4741]: I0226 08:37:33.835451 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333"} Feb 26 08:37:33 crc kubenswrapper[4741]: I0226 08:37:33.840614 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3d649a1f-19db-4b0d-8162-aec7e405ccb4","Type":"ContainerStarted","Data":"fce9cadf81202bc9253e71a10f4786fa64850de54fd0a7366c1936d8f5d67407"} Feb 26 08:37:33 crc kubenswrapper[4741]: E0226 08:37:33.843180 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" podUID="97d8e026-9359-4dfe-b9b0-01857c576cc5" Feb 26 08:37:33 crc kubenswrapper[4741]: W0226 08:37:33.870699 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64c14e41_587f_4290_9133_6a3f89e43d86.slice/crio-40b9a5f077f8f0c61dd7d1f366eb3c87a198a7e145491092b0ef8fc78c1e7090 WatchSource:0}: Error finding container 
40b9a5f077f8f0c61dd7d1f366eb3c87a198a7e145491092b0ef8fc78c1e7090: Status 404 returned error can't find the container with id 40b9a5f077f8f0c61dd7d1f366eb3c87a198a7e145491092b0ef8fc78c1e7090 Feb 26 08:37:33 crc kubenswrapper[4741]: I0226 08:37:33.971047 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-krvtm" Feb 26 08:37:33 crc kubenswrapper[4741]: I0226 08:37:33.984567 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" Feb 26 08:37:33 crc kubenswrapper[4741]: I0226 08:37:33.993233 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g29mj" Feb 26 08:37:34 crc kubenswrapper[4741]: E0226 08:37:34.010549 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Feb 26 08:37:34 crc kubenswrapper[4741]: E0226 08:37:34.011032 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:nf7h67fh5f9h695h55dh65h5d5h95hdbh5b7h558h9fh666h56fh556h78h87h654h54h5f6h598h698h5c6h54ch547h576h54ch87h58h676h675h569q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-22xbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfi
le:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(85476d1c-5870-4efd-ae6f-ef9a09d9d888): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:37:34 crc kubenswrapper[4741]: E0226 08:37:34.012233 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:b6aa816fadd272044ee44c055c677121fc5a92123601073944b4969d57a952d4: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-nb-db-server/blobs/sha256:b6aa816fadd272044ee44c055c677121fc5a92123601073944b4969d57a952d4\\\": context canceled\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ovsdbserver-nb-0" podUID="85476d1c-5870-4efd-ae6f-ef9a09d9d888" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.025785 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvp9t\" (UniqueName: \"kubernetes.io/projected/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-kube-api-access-kvp9t\") pod \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\" (UID: \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\") " Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.025907 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-dns-svc\") pod \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\" (UID: \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\") " Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.025973 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-qd9s4\" (UniqueName: \"kubernetes.io/projected/ce3a54c1-256c-4209-bbe4-07b9d62be849-kube-api-access-qd9s4\") pod \"ce3a54c1-256c-4209-bbe4-07b9d62be849\" (UID: \"ce3a54c1-256c-4209-bbe4-07b9d62be849\") " Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.026099 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce3a54c1-256c-4209-bbe4-07b9d62be849-dns-svc\") pod \"ce3a54c1-256c-4209-bbe4-07b9d62be849\" (UID: \"ce3a54c1-256c-4209-bbe4-07b9d62be849\") " Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.026291 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbhph\" (UniqueName: \"kubernetes.io/projected/c8319c09-a1ef-4694-a3f2-ae3d49ab3e53-kube-api-access-tbhph\") pod \"c8319c09-a1ef-4694-a3f2-ae3d49ab3e53\" (UID: \"c8319c09-a1ef-4694-a3f2-ae3d49ab3e53\") " Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.026327 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce3a54c1-256c-4209-bbe4-07b9d62be849-config\") pod \"ce3a54c1-256c-4209-bbe4-07b9d62be849\" (UID: \"ce3a54c1-256c-4209-bbe4-07b9d62be849\") " Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.026492 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-config\") pod \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\" (UID: \"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37\") " Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.026584 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8319c09-a1ef-4694-a3f2-ae3d49ab3e53-config\") pod \"c8319c09-a1ef-4694-a3f2-ae3d49ab3e53\" (UID: \"c8319c09-a1ef-4694-a3f2-ae3d49ab3e53\") " Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 
08:37:34.026884 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37" (UID: "9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.027459 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce3a54c1-256c-4209-bbe4-07b9d62be849-config" (OuterVolumeSpecName: "config") pod "ce3a54c1-256c-4209-bbe4-07b9d62be849" (UID: "ce3a54c1-256c-4209-bbe4-07b9d62be849"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.027483 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.027825 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-config" (OuterVolumeSpecName: "config") pod "9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37" (UID: "9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.028124 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce3a54c1-256c-4209-bbe4-07b9d62be849-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce3a54c1-256c-4209-bbe4-07b9d62be849" (UID: "ce3a54c1-256c-4209-bbe4-07b9d62be849"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.028302 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8319c09-a1ef-4694-a3f2-ae3d49ab3e53-config" (OuterVolumeSpecName: "config") pod "c8319c09-a1ef-4694-a3f2-ae3d49ab3e53" (UID: "c8319c09-a1ef-4694-a3f2-ae3d49ab3e53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.040603 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-kube-api-access-kvp9t" (OuterVolumeSpecName: "kube-api-access-kvp9t") pod "9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37" (UID: "9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37"). InnerVolumeSpecName "kube-api-access-kvp9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.040754 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce3a54c1-256c-4209-bbe4-07b9d62be849-kube-api-access-qd9s4" (OuterVolumeSpecName: "kube-api-access-qd9s4") pod "ce3a54c1-256c-4209-bbe4-07b9d62be849" (UID: "ce3a54c1-256c-4209-bbe4-07b9d62be849"). InnerVolumeSpecName "kube-api-access-qd9s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.044677 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8319c09-a1ef-4694-a3f2-ae3d49ab3e53-kube-api-access-tbhph" (OuterVolumeSpecName: "kube-api-access-tbhph") pod "c8319c09-a1ef-4694-a3f2-ae3d49ab3e53" (UID: "c8319c09-a1ef-4694-a3f2-ae3d49ab3e53"). InnerVolumeSpecName "kube-api-access-tbhph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.130002 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd9s4\" (UniqueName: \"kubernetes.io/projected/ce3a54c1-256c-4209-bbe4-07b9d62be849-kube-api-access-qd9s4\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.130055 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce3a54c1-256c-4209-bbe4-07b9d62be849-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.130066 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbhph\" (UniqueName: \"kubernetes.io/projected/c8319c09-a1ef-4694-a3f2-ae3d49ab3e53-kube-api-access-tbhph\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.130079 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce3a54c1-256c-4209-bbe4-07b9d62be849-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.130088 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.130096 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8319c09-a1ef-4694-a3f2-ae3d49ab3e53-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.130121 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvp9t\" (UniqueName: \"kubernetes.io/projected/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37-kube-api-access-kvp9t\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.855873 4741 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-krvtm" event={"ID":"ce3a54c1-256c-4209-bbe4-07b9d62be849","Type":"ContainerDied","Data":"dc482b77d92a89d50fec025f6731316afb8fe281d228224cdaac4887827e4e9a"} Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.856127 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-krvtm" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.858606 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bkphx" event={"ID":"64c14e41-587f-4290-9133-6a3f89e43d86","Type":"ContainerStarted","Data":"40b9a5f077f8f0c61dd7d1f366eb3c87a198a7e145491092b0ef8fc78c1e7090"} Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.862130 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-g29mj" event={"ID":"c8319c09-a1ef-4694-a3f2-ae3d49ab3e53","Type":"ContainerDied","Data":"881d8d26c6f029355b5046a9f1633b38a0aa6cee62091f789da515e972d47dc3"} Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.862256 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g29mj" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.872794 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" event={"ID":"9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37","Type":"ContainerDied","Data":"e75e16a7c3f110f2a400a44dcc2e97083f9762bd96123ce74ef884a3e13c341e"} Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.872838 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v5j5l" Feb 26 08:37:34 crc kubenswrapper[4741]: I0226 08:37:34.875079 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" event={"ID":"8004c8c7-3187-4400-bd77-4e7cd0c3dd71","Type":"ContainerStarted","Data":"93a986b2d44410faf9ea8cdf266d4aec16c833f2dbac06064a60a43767e61bf6"} Feb 26 08:37:35 crc kubenswrapper[4741]: I0226 08:37:35.066294 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g29mj"] Feb 26 08:37:35 crc kubenswrapper[4741]: I0226 08:37:35.089819 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g29mj"] Feb 26 08:37:35 crc kubenswrapper[4741]: I0226 08:37:35.131203 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krvtm"] Feb 26 08:37:35 crc kubenswrapper[4741]: I0226 08:37:35.142453 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krvtm"] Feb 26 08:37:35 crc kubenswrapper[4741]: I0226 08:37:35.215305 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v5j5l"] Feb 26 08:37:35 crc kubenswrapper[4741]: I0226 08:37:35.233264 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v5j5l"] Feb 26 08:37:35 crc kubenswrapper[4741]: I0226 08:37:35.802084 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37" path="/var/lib/kubelet/pods/9e7cfc3f-3f88-42ed-87a8-6a726a9c2d37/volumes" Feb 26 08:37:35 crc kubenswrapper[4741]: I0226 08:37:35.803235 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8319c09-a1ef-4694-a3f2-ae3d49ab3e53" path="/var/lib/kubelet/pods/c8319c09-a1ef-4694-a3f2-ae3d49ab3e53/volumes" Feb 26 08:37:35 crc kubenswrapper[4741]: I0226 08:37:35.803711 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ce3a54c1-256c-4209-bbe4-07b9d62be849" path="/var/lib/kubelet/pods/ce3a54c1-256c-4209-bbe4-07b9d62be849/volumes" Feb 26 08:37:35 crc kubenswrapper[4741]: E0226 08:37:35.885059 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 26 08:37:35 crc kubenswrapper[4741]: E0226 08:37:35.885394 4741 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 26 08:37:35 crc kubenswrapper[4741]: E0226 08:37:35.885609 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jfh68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(90779734-9d35-47b9-ac0b-dbf02e3453a5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 08:37:35 crc kubenswrapper[4741]: E0226 08:37:35.886918 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="90779734-9d35-47b9-ac0b-dbf02e3453a5" Feb 26 08:37:36 crc kubenswrapper[4741]: I0226 08:37:36.344548 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7mjbs"] Feb 26 08:37:36 crc kubenswrapper[4741]: E0226 08:37:36.911947 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" 
podUID="90779734-9d35-47b9-ac0b-dbf02e3453a5" Feb 26 08:37:37 crc kubenswrapper[4741]: I0226 08:37:37.918761 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mjbs" event={"ID":"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba","Type":"ContainerStarted","Data":"bfb7dadaad0f856aca42ce3e7764c790d023815ab1602bc52c98c313076a43e1"} Feb 26 08:37:37 crc kubenswrapper[4741]: I0226 08:37:37.922217 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" event={"ID":"432351e8-6adb-434f-b110-c141a4123d2c","Type":"ContainerStarted","Data":"78db81af35e65bd81b1ce7b96dbcf5f81e7e674b58feef2b03ca871e7b03f8b2"} Feb 26 08:37:37 crc kubenswrapper[4741]: I0226 08:37:37.952836 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8jc6t" podStartSLOduration=11.78014398 podStartE2EDuration="1m25.952805349s" podCreationTimestamp="2026-02-26 08:36:12 +0000 UTC" firstStartedPulling="2026-02-26 08:36:14.663519752 +0000 UTC m=+1409.659457149" lastFinishedPulling="2026-02-26 08:37:28.836181131 +0000 UTC m=+1483.832118518" observedRunningTime="2026-02-26 08:37:37.939943585 +0000 UTC m=+1492.935880982" watchObservedRunningTime="2026-02-26 08:37:37.952805349 +0000 UTC m=+1492.948742736" Feb 26 08:37:38 crc kubenswrapper[4741]: I0226 08:37:38.933501 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bkphx" event={"ID":"64c14e41-587f-4290-9133-6a3f89e43d86","Type":"ContainerStarted","Data":"c616aefb39d8f060dec94c98b234965470d7ab85052de2d00aa6069ddd004753"} Feb 26 08:37:38 crc kubenswrapper[4741]: I0226 08:37:38.935410 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d7ee427f-ada6-4496-a314-c5cd63abefcd","Type":"ContainerStarted","Data":"13a813573c6e5b0ba5a15d6b7fd4af90c7921de4a05f8f3890f20b211902cda8"} Feb 26 08:37:38 crc 
kubenswrapper[4741]: I0226 08:37:38.935606 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 26 08:37:38 crc kubenswrapper[4741]: I0226 08:37:38.937241 4741 generic.go:334] "Generic (PLEG): container finished" podID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerID="3306fde0345fef6721dbae9524663ccabadc21490ae3881e1b028963169b4cfb" exitCode=0 Feb 26 08:37:38 crc kubenswrapper[4741]: I0226 08:37:38.937316 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mjbs" event={"ID":"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba","Type":"ContainerDied","Data":"3306fde0345fef6721dbae9524663ccabadc21490ae3881e1b028963169b4cfb"} Feb 26 08:37:38 crc kubenswrapper[4741]: I0226 08:37:38.939228 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2b1496b8-9f14-472d-af02-7357f75ba7cf","Type":"ContainerStarted","Data":"706d4bd250ef9936f1caa0eb3f62e38a8cfd0c8280c0791422a11ccb089c95dc"} Feb 26 08:37:38 crc kubenswrapper[4741]: I0226 08:37:38.956310 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-bkphx" podStartSLOduration=37.610590595 podStartE2EDuration="39.956277299s" podCreationTimestamp="2026-02-26 08:36:59 +0000 UTC" firstStartedPulling="2026-02-26 08:37:33.915827135 +0000 UTC m=+1488.911764522" lastFinishedPulling="2026-02-26 08:37:36.261513829 +0000 UTC m=+1491.257451226" observedRunningTime="2026-02-26 08:37:38.951869444 +0000 UTC m=+1493.947806851" watchObservedRunningTime="2026-02-26 08:37:38.956277299 +0000 UTC m=+1493.952214696" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.069697 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.750018871 podStartE2EDuration="1m30.069661777s" podCreationTimestamp="2026-02-26 08:36:09 +0000 UTC" firstStartedPulling="2026-02-26 08:36:10.580208011 +0000 
UTC m=+1405.576145398" lastFinishedPulling="2026-02-26 08:37:35.899850927 +0000 UTC m=+1490.895788304" observedRunningTime="2026-02-26 08:37:39.049746573 +0000 UTC m=+1494.045683980" watchObservedRunningTime="2026-02-26 08:37:39.069661777 +0000 UTC m=+1494.065599164" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.563058 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8xgsf"] Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.596232 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ks9jp"] Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.598971 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.612572 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.644737 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ks9jp"] Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.799772 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npkpm\" (UniqueName: \"kubernetes.io/projected/bae4b643-626b-412e-b8e7-33844ae9610d-kube-api-access-npkpm\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.799866 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.799928 
4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.799969 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.800329 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-config\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.903088 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npkpm\" (UniqueName: \"kubernetes.io/projected/bae4b643-626b-412e-b8e7-33844ae9610d-kube-api-access-npkpm\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.912903 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.913159 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.913255 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.913602 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-config\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.915671 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-config\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.917515 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.917545 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.918225 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:39 crc kubenswrapper[4741]: I0226 08:37:39.951828 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npkpm\" (UniqueName: \"kubernetes.io/projected/bae4b643-626b-412e-b8e7-33844ae9610d-kube-api-access-npkpm\") pod \"dnsmasq-dns-86db49b7ff-ks9jp\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.009286 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"85476d1c-5870-4efd-ae6f-ef9a09d9d888","Type":"ContainerStarted","Data":"83a477567e55d76020ab56c52524a97187afdc6834a5e9ded297f10c1813a3ba"} Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.019763 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3d649a1f-19db-4b0d-8162-aec7e405ccb4","Type":"ContainerStarted","Data":"ae99765a8864bb5e97b63dfa9d22de015b039dff8ddf472340bc5bd82485c7ab"} Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.022862 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"26969fe6-2bb9-4f23-8c49-d9d359763da3","Type":"ContainerStarted","Data":"8c616e107c839e4915f447057601903ce753a5701dce65084e24fa53495807e7"} Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.035580 4741 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/openstack-galera-0" event={"ID":"ed8ae863-261b-4cbd-945a-b79c99fa0a9f","Type":"ContainerStarted","Data":"03b0841a99ba4b0fd9f597780550f670b4313a79053b26d4168b8b0f27c79411"} Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.055513 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"815578f6-90b1-4afc-91c7-d24a59a11b23","Type":"ContainerStarted","Data":"6dd5410e6ea19da91c248be084d0673d3300b2ab87b7a41e69fd4beec4aa2e91"} Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.558997 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.577601 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.639135 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d8e026-9359-4dfe-b9b0-01857c576cc5-config\") pod \"97d8e026-9359-4dfe-b9b0-01857c576cc5\" (UID: \"97d8e026-9359-4dfe-b9b0-01857c576cc5\") " Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.639257 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97d8e026-9359-4dfe-b9b0-01857c576cc5-dns-svc\") pod \"97d8e026-9359-4dfe-b9b0-01857c576cc5\" (UID: \"97d8e026-9359-4dfe-b9b0-01857c576cc5\") " Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.639373 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6gqc\" (UniqueName: \"kubernetes.io/projected/97d8e026-9359-4dfe-b9b0-01857c576cc5-kube-api-access-q6gqc\") pod \"97d8e026-9359-4dfe-b9b0-01857c576cc5\" (UID: \"97d8e026-9359-4dfe-b9b0-01857c576cc5\") " Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.639843 4741 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97d8e026-9359-4dfe-b9b0-01857c576cc5-config" (OuterVolumeSpecName: "config") pod "97d8e026-9359-4dfe-b9b0-01857c576cc5" (UID: "97d8e026-9359-4dfe-b9b0-01857c576cc5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.640465 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d8e026-9359-4dfe-b9b0-01857c576cc5-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.641426 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97d8e026-9359-4dfe-b9b0-01857c576cc5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "97d8e026-9359-4dfe-b9b0-01857c576cc5" (UID: "97d8e026-9359-4dfe-b9b0-01857c576cc5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.648259 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d8e026-9359-4dfe-b9b0-01857c576cc5-kube-api-access-q6gqc" (OuterVolumeSpecName: "kube-api-access-q6gqc") pod "97d8e026-9359-4dfe-b9b0-01857c576cc5" (UID: "97d8e026-9359-4dfe-b9b0-01857c576cc5"). InnerVolumeSpecName "kube-api-access-q6gqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.742741 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97d8e026-9359-4dfe-b9b0-01857c576cc5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:40 crc kubenswrapper[4741]: I0226 08:37:40.742778 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6gqc\" (UniqueName: \"kubernetes.io/projected/97d8e026-9359-4dfe-b9b0-01857c576cc5-kube-api-access-q6gqc\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:41 crc kubenswrapper[4741]: I0226 08:37:41.066709 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"403c217b-d3d9-47a3-8a5a-4f6e917edcad","Type":"ContainerStarted","Data":"40077b640f4292247aae5f8f0827b3ba522775716ed561f8bb53c5d384790948"} Feb 26 08:37:41 crc kubenswrapper[4741]: I0226 08:37:41.068823 4741 generic.go:334] "Generic (PLEG): container finished" podID="8004c8c7-3187-4400-bd77-4e7cd0c3dd71" containerID="2b4202bdab92e0cd0a5aace3903a8ed81e412312c1b164e763aa35f3146f1b3f" exitCode=0 Feb 26 08:37:41 crc kubenswrapper[4741]: I0226 08:37:41.068929 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" event={"ID":"8004c8c7-3187-4400-bd77-4e7cd0c3dd71","Type":"ContainerDied","Data":"2b4202bdab92e0cd0a5aace3903a8ed81e412312c1b164e763aa35f3146f1b3f"} Feb 26 08:37:41 crc kubenswrapper[4741]: I0226 08:37:41.070688 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" event={"ID":"97d8e026-9359-4dfe-b9b0-01857c576cc5","Type":"ContainerDied","Data":"110bd7dabf809866f130c5e44e592344b19c7016b33062f6891a94ca8b9b9c96"} Feb 26 08:37:41 crc kubenswrapper[4741]: I0226 08:37:41.070786 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8xgsf" Feb 26 08:37:41 crc kubenswrapper[4741]: I0226 08:37:41.162966 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ks9jp"] Feb 26 08:37:41 crc kubenswrapper[4741]: I0226 08:37:41.727423 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8xgsf"] Feb 26 08:37:41 crc kubenswrapper[4741]: I0226 08:37:41.746364 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8xgsf"] Feb 26 08:37:41 crc kubenswrapper[4741]: I0226 08:37:41.807165 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d8e026-9359-4dfe-b9b0-01857c576cc5" path="/var/lib/kubelet/pods/97d8e026-9359-4dfe-b9b0-01857c576cc5/volumes" Feb 26 08:37:42 crc kubenswrapper[4741]: I0226 08:37:42.084885 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" event={"ID":"bae4b643-626b-412e-b8e7-33844ae9610d","Type":"ContainerStarted","Data":"d54fba0c2e69ad6900f8bd7accf15cfa3283f5a168234d1aeac98772772fac2d"} Feb 26 08:37:42 crc kubenswrapper[4741]: I0226 08:37:42.086660 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e870a59-585e-4369-88d5-644e5034ad33","Type":"ContainerStarted","Data":"1d5c9d1f24b9ef8be4bceadba12646b88d092e81a4e46b4d6d132ec83d69c54e"} Feb 26 08:37:42 crc kubenswrapper[4741]: I0226 08:37:42.088123 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d20c309e-9b10-446d-a7f7-8aad2bdecfc9","Type":"ContainerStarted","Data":"10c025245d1919b81b16d3a4063d71f4e69aef927a2e25d87e38b1cb026aa792"} Feb 26 08:37:43 crc kubenswrapper[4741]: I0226 08:37:43.102089 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"85476d1c-5870-4efd-ae6f-ef9a09d9d888","Type":"ContainerStarted","Data":"8eb7a9a6b050769781284559831e82465de657162c489260bc8d7c9585856d9c"} Feb 26 08:37:43 crc kubenswrapper[4741]: I0226 08:37:43.104451 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3d649a1f-19db-4b0d-8162-aec7e405ccb4","Type":"ContainerStarted","Data":"e8bfdfbf0b11e79af04b1761428561acb1459ad321b91ece8ab91d70a582c1b3"} Feb 26 08:37:43 crc kubenswrapper[4741]: I0226 08:37:43.106289 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" event={"ID":"bae4b643-626b-412e-b8e7-33844ae9610d","Type":"ContainerStarted","Data":"1ff0bb213ea948ef68c5ba3194a26ca0feeeaa65c6d1e9c948a5449dc2d883df"} Feb 26 08:37:43 crc kubenswrapper[4741]: I0226 08:37:43.109006 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mjbs" event={"ID":"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba","Type":"ContainerStarted","Data":"488c2db191d6535ba4596845eb0a78f1f45de668a17c5c1e09ca6eadfe289f4a"} Feb 26 08:37:43 crc kubenswrapper[4741]: I0226 08:37:43.111455 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" event={"ID":"8004c8c7-3187-4400-bd77-4e7cd0c3dd71","Type":"ContainerStarted","Data":"53bb994d615ae6b4258c96569adfe6f2e43c52ef82c88c4d4b74eb091e6f92ae"} Feb 26 08:37:43 crc kubenswrapper[4741]: I0226 08:37:43.112239 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:43 crc kubenswrapper[4741]: I0226 08:37:43.211978 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" podStartSLOduration=40.114161878 podStartE2EDuration="43.21194656s" podCreationTimestamp="2026-02-26 08:37:00 +0000 UTC" firstStartedPulling="2026-02-26 08:37:33.923595695 +0000 UTC m=+1488.919533082" lastFinishedPulling="2026-02-26 
08:37:37.021380377 +0000 UTC m=+1492.017317764" observedRunningTime="2026-02-26 08:37:43.191386978 +0000 UTC m=+1498.187324365" watchObservedRunningTime="2026-02-26 08:37:43.21194656 +0000 UTC m=+1498.207883947" Feb 26 08:37:44 crc kubenswrapper[4741]: I0226 08:37:44.149528 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=82.435834459 podStartE2EDuration="1m26.149500406s" podCreationTimestamp="2026-02-26 08:36:18 +0000 UTC" firstStartedPulling="2026-02-26 08:37:33.30771654 +0000 UTC m=+1488.303653927" lastFinishedPulling="2026-02-26 08:37:37.021382487 +0000 UTC m=+1492.017319874" observedRunningTime="2026-02-26 08:37:44.146916233 +0000 UTC m=+1499.142853630" watchObservedRunningTime="2026-02-26 08:37:44.149500406 +0000 UTC m=+1499.145437793" Feb 26 08:37:44 crc kubenswrapper[4741]: I0226 08:37:44.186218 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.611445822 podStartE2EDuration="1m30.186187364s" podCreationTimestamp="2026-02-26 08:36:14 +0000 UTC" firstStartedPulling="2026-02-26 08:36:18.52032834 +0000 UTC m=+1413.516265727" lastFinishedPulling="2026-02-26 08:37:37.095069892 +0000 UTC m=+1492.091007269" observedRunningTime="2026-02-26 08:37:44.177088726 +0000 UTC m=+1499.173026113" watchObservedRunningTime="2026-02-26 08:37:44.186187364 +0000 UTC m=+1499.182124751" Feb 26 08:37:44 crc kubenswrapper[4741]: I0226 08:37:44.399546 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 26 08:37:44 crc kubenswrapper[4741]: I0226 08:37:44.684422 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 26 08:37:46 crc kubenswrapper[4741]: I0226 08:37:46.142164 4741 generic.go:334] "Generic (PLEG): container finished" podID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" 
containerID="488c2db191d6535ba4596845eb0a78f1f45de668a17c5c1e09ca6eadfe289f4a" exitCode=0 Feb 26 08:37:46 crc kubenswrapper[4741]: I0226 08:37:46.142237 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mjbs" event={"ID":"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba","Type":"ContainerDied","Data":"488c2db191d6535ba4596845eb0a78f1f45de668a17c5c1e09ca6eadfe289f4a"} Feb 26 08:37:46 crc kubenswrapper[4741]: I0226 08:37:46.399076 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 26 08:37:46 crc kubenswrapper[4741]: I0226 08:37:46.444169 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 26 08:37:46 crc kubenswrapper[4741]: I0226 08:37:46.497009 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 26 08:37:46 crc kubenswrapper[4741]: I0226 08:37:46.497070 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 26 08:37:46 crc kubenswrapper[4741]: I0226 08:37:46.540077 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.194237 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.199361 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 26 08:37:47 crc kubenswrapper[4741]: E0226 08:37:47.441155 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbae4b643_626b_412e_b8e7_33844ae9610d.slice/crio-conmon-1ff0bb213ea948ef68c5ba3194a26ca0feeeaa65c6d1e9c948a5449dc2d883df.scope\": RecentStats: unable to find 
data in memory cache]" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.643486 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.659169 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.668533 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.669367 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.669896 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-785ls" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.670355 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.705853 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.729161 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/10d582b9-9e4a-4ce4-8763-addb194c9ced-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.729340 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10d582b9-9e4a-4ce4-8763-addb194c9ced-config\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.729390 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10d582b9-9e4a-4ce4-8763-addb194c9ced-scripts\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.729415 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d582b9-9e4a-4ce4-8763-addb194c9ced-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.729533 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndkps\" (UniqueName: \"kubernetes.io/projected/10d582b9-9e4a-4ce4-8763-addb194c9ced-kube-api-access-ndkps\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.729569 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d582b9-9e4a-4ce4-8763-addb194c9ced-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.729603 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d582b9-9e4a-4ce4-8763-addb194c9ced-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.834728 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/10d582b9-9e4a-4ce4-8763-addb194c9ced-config\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.834838 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10d582b9-9e4a-4ce4-8763-addb194c9ced-scripts\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.834891 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d582b9-9e4a-4ce4-8763-addb194c9ced-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.835087 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndkps\" (UniqueName: \"kubernetes.io/projected/10d582b9-9e4a-4ce4-8763-addb194c9ced-kube-api-access-ndkps\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.835199 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d582b9-9e4a-4ce4-8763-addb194c9ced-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.835242 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d582b9-9e4a-4ce4-8763-addb194c9ced-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 
08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.835393 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/10d582b9-9e4a-4ce4-8763-addb194c9ced-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.839045 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/10d582b9-9e4a-4ce4-8763-addb194c9ced-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.945202 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10d582b9-9e4a-4ce4-8763-addb194c9ced-config\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.947684 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10d582b9-9e4a-4ce4-8763-addb194c9ced-scripts\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.949783 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d582b9-9e4a-4ce4-8763-addb194c9ced-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.950445 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d582b9-9e4a-4ce4-8763-addb194c9ced-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.951027 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d582b9-9e4a-4ce4-8763-addb194c9ced-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.966668 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndkps\" (UniqueName: \"kubernetes.io/projected/10d582b9-9e4a-4ce4-8763-addb194c9ced-kube-api-access-ndkps\") pod \"ovn-northd-0\" (UID: \"10d582b9-9e4a-4ce4-8763-addb194c9ced\") " pod="openstack/ovn-northd-0" Feb 26 08:37:47 crc kubenswrapper[4741]: I0226 08:37:47.985445 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 26 08:37:48 crc kubenswrapper[4741]: I0226 08:37:48.165549 4741 generic.go:334] "Generic (PLEG): container finished" podID="bae4b643-626b-412e-b8e7-33844ae9610d" containerID="1ff0bb213ea948ef68c5ba3194a26ca0feeeaa65c6d1e9c948a5449dc2d883df" exitCode=0 Feb 26 08:37:48 crc kubenswrapper[4741]: I0226 08:37:48.166010 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" event={"ID":"bae4b643-626b-412e-b8e7-33844ae9610d","Type":"ContainerDied","Data":"1ff0bb213ea948ef68c5ba3194a26ca0feeeaa65c6d1e9c948a5449dc2d883df"} Feb 26 08:37:50 crc kubenswrapper[4741]: I0226 08:37:50.188333 4741 generic.go:334] "Generic (PLEG): container finished" podID="1e870a59-585e-4369-88d5-644e5034ad33" containerID="1d5c9d1f24b9ef8be4bceadba12646b88d092e81a4e46b4d6d132ec83d69c54e" exitCode=0 Feb 26 08:37:50 crc kubenswrapper[4741]: I0226 08:37:50.188571 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"1e870a59-585e-4369-88d5-644e5034ad33","Type":"ContainerDied","Data":"1d5c9d1f24b9ef8be4bceadba12646b88d092e81a4e46b4d6d132ec83d69c54e"} Feb 26 08:37:50 crc kubenswrapper[4741]: I0226 08:37:50.520283 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.176255 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ks9jp"] Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.256469 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-tqsq4"] Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.259101 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.289999 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tqsq4"] Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.383199 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.383260 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-config\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.383299 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-dns-svc\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.383381 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.383591 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5zbw\" (UniqueName: \"kubernetes.io/projected/bd73291d-c1d5-4595-89e0-f756eca4ee23-kube-api-access-s5zbw\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.486297 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.486383 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-config\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.486442 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-dns-svc\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.486507 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.486799 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5zbw\" (UniqueName: \"kubernetes.io/projected/bd73291d-c1d5-4595-89e0-f756eca4ee23-kube-api-access-s5zbw\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.524593 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.525318 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.525745 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-dns-svc\") pod 
\"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.526245 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-config\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.534475 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5zbw\" (UniqueName: \"kubernetes.io/projected/bd73291d-c1d5-4595-89e0-f756eca4ee23-kube-api-access-s5zbw\") pod \"dnsmasq-dns-698758b865-tqsq4\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:52 crc kubenswrapper[4741]: I0226 08:37:52.595323 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.381575 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.393959 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.399830 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.400151 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.401634 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.427449 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-xn7t6" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.446374 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.503257 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-mt4js"] Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.506689 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.509682 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b0231b-fbdf-4714-ac14-d3621c8c7807-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.509792 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-af3a3291-fdb1-4cee-88fd-8c32e6d65b41\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af3a3291-fdb1-4cee-88fd-8c32e6d65b41\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.509829 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/91b0231b-fbdf-4714-ac14-d3621c8c7807-lock\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.509877 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.509900 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgtj7\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-kube-api-access-fgtj7\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " 
pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.509919 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/91b0231b-fbdf-4714-ac14-d3621c8c7807-cache\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.519272 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.519544 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.540607 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.575498 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-mt4js"] Feb 26 08:37:53 crc kubenswrapper[4741]: E0226 08:37:53.576664 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-bc7dn ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-bc7dn ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-mt4js" podUID="bef8366d-0f45-48bf-a157-71f3a57fb93c" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.601648 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-xwx84"] Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.603349 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.611931 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bef8366d-0f45-48bf-a157-71f3a57fb93c-etc-swift\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.611992 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-combined-ca-bundle\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612021 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/548f1177-df4c-4b50-920f-f5b9ff95c283-etc-swift\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612047 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-swiftconf\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612076 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-af3a3291-fdb1-4cee-88fd-8c32e6d65b41\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af3a3291-fdb1-4cee-88fd-8c32e6d65b41\") pod \"swift-storage-0\" (UID: 
\"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612120 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/91b0231b-fbdf-4714-ac14-d3621c8c7807-lock\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612152 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7dn\" (UniqueName: \"kubernetes.io/projected/bef8366d-0f45-48bf-a157-71f3a57fb93c-kube-api-access-bc7dn\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612176 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bef8366d-0f45-48bf-a157-71f3a57fb93c-scripts\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612195 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bef8366d-0f45-48bf-a157-71f3a57fb93c-ring-data-devices\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612226 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75pnb\" (UniqueName: \"kubernetes.io/projected/548f1177-df4c-4b50-920f-f5b9ff95c283-kube-api-access-75pnb\") pod \"swift-ring-rebalance-xwx84\" (UID: 
\"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612251 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612272 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgtj7\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-kube-api-access-fgtj7\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612289 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/91b0231b-fbdf-4714-ac14-d3621c8c7807-cache\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612344 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-dispersionconf\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612365 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-swiftconf\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: 
I0226 08:37:53.612441 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b0231b-fbdf-4714-ac14-d3621c8c7807-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612461 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-combined-ca-bundle\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612505 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/548f1177-df4c-4b50-920f-f5b9ff95c283-ring-data-devices\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612529 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/548f1177-df4c-4b50-920f-f5b9ff95c283-scripts\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.612554 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-dispersionconf\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: E0226 08:37:53.613479 
4741 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 08:37:53 crc kubenswrapper[4741]: E0226 08:37:53.613518 4741 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 08:37:53 crc kubenswrapper[4741]: E0226 08:37:53.613577 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift podName:91b0231b-fbdf-4714-ac14-d3621c8c7807 nodeName:}" failed. No retries permitted until 2026-02-26 08:37:54.113553243 +0000 UTC m=+1509.109490630 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift") pod "swift-storage-0" (UID: "91b0231b-fbdf-4714-ac14-d3621c8c7807") : configmap "swift-ring-files" not found Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.613771 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/91b0231b-fbdf-4714-ac14-d3621c8c7807-cache\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.613872 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/91b0231b-fbdf-4714-ac14-d3621c8c7807-lock\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.621362 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b0231b-fbdf-4714-ac14-d3621c8c7807-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 
08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.625227 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-xwx84"] Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.631605 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.631667 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-af3a3291-fdb1-4cee-88fd-8c32e6d65b41\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af3a3291-fdb1-4cee-88fd-8c32e6d65b41\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bb4c43166cc0958b2fb15928ed349cea32052d346d169046e80d5be354882fbb/globalmount\"" pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.649188 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-mt4js"] Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.661337 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgtj7\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-kube-api-access-fgtj7\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.713973 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/548f1177-df4c-4b50-920f-f5b9ff95c283-ring-data-devices\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.714039 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/548f1177-df4c-4b50-920f-f5b9ff95c283-scripts\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.714065 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-dispersionconf\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.714090 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bef8366d-0f45-48bf-a157-71f3a57fb93c-etc-swift\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.714130 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-combined-ca-bundle\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.714151 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/548f1177-df4c-4b50-920f-f5b9ff95c283-etc-swift\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.714167 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-swiftconf\") pod 
\"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.714220 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc7dn\" (UniqueName: \"kubernetes.io/projected/bef8366d-0f45-48bf-a157-71f3a57fb93c-kube-api-access-bc7dn\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.714241 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bef8366d-0f45-48bf-a157-71f3a57fb93c-scripts\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.714256 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bef8366d-0f45-48bf-a157-71f3a57fb93c-ring-data-devices\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.714279 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75pnb\" (UniqueName: \"kubernetes.io/projected/548f1177-df4c-4b50-920f-f5b9ff95c283-kube-api-access-75pnb\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.714353 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-dispersionconf\") pod \"swift-ring-rebalance-mt4js\" (UID: 
\"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.714372 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-swiftconf\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.714429 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-combined-ca-bundle\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.716746 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bef8366d-0f45-48bf-a157-71f3a57fb93c-ring-data-devices\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.717047 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/548f1177-df4c-4b50-920f-f5b9ff95c283-ring-data-devices\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.717249 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bef8366d-0f45-48bf-a157-71f3a57fb93c-scripts\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc 
kubenswrapper[4741]: I0226 08:37:53.717552 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/548f1177-df4c-4b50-920f-f5b9ff95c283-scripts\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.718248 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/548f1177-df4c-4b50-920f-f5b9ff95c283-etc-swift\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.718322 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bef8366d-0f45-48bf-a157-71f3a57fb93c-etc-swift\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.718936 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-swiftconf\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.722264 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-combined-ca-bundle\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.723741 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-dispersionconf\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.725421 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-dispersionconf\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.731932 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-swiftconf\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.738641 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc7dn\" (UniqueName: \"kubernetes.io/projected/bef8366d-0f45-48bf-a157-71f3a57fb93c-kube-api-access-bc7dn\") pod \"swift-ring-rebalance-mt4js\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.746328 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-combined-ca-bundle\") pod \"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.749820 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75pnb\" (UniqueName: \"kubernetes.io/projected/548f1177-df4c-4b50-920f-f5b9ff95c283-kube-api-access-75pnb\") pod 
\"swift-ring-rebalance-xwx84\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.787092 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-af3a3291-fdb1-4cee-88fd-8c32e6d65b41\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af3a3291-fdb1-4cee-88fd-8c32e6d65b41\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:53 crc kubenswrapper[4741]: I0226 08:37:53.832150 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.137891 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:54 crc kubenswrapper[4741]: E0226 08:37:54.138269 4741 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 08:37:54 crc kubenswrapper[4741]: E0226 08:37:54.138317 4741 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 08:37:54 crc kubenswrapper[4741]: E0226 08:37:54.138422 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift podName:91b0231b-fbdf-4714-ac14-d3621c8c7807 nodeName:}" failed. No retries permitted until 2026-02-26 08:37:55.138387852 +0000 UTC m=+1510.134325239 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift") pod "swift-storage-0" (UID: "91b0231b-fbdf-4714-ac14-d3621c8c7807") : configmap "swift-ring-files" not found Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.240750 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.263171 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.347777 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bef8366d-0f45-48bf-a157-71f3a57fb93c-etc-swift\") pod \"bef8366d-0f45-48bf-a157-71f3a57fb93c\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.348467 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bef8366d-0f45-48bf-a157-71f3a57fb93c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bef8366d-0f45-48bf-a157-71f3a57fb93c" (UID: "bef8366d-0f45-48bf-a157-71f3a57fb93c"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.348660 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc7dn\" (UniqueName: \"kubernetes.io/projected/bef8366d-0f45-48bf-a157-71f3a57fb93c-kube-api-access-bc7dn\") pod \"bef8366d-0f45-48bf-a157-71f3a57fb93c\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.348830 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bef8366d-0f45-48bf-a157-71f3a57fb93c-scripts\") pod \"bef8366d-0f45-48bf-a157-71f3a57fb93c\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.349075 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-swiftconf\") pod \"bef8366d-0f45-48bf-a157-71f3a57fb93c\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.349291 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bef8366d-0f45-48bf-a157-71f3a57fb93c-ring-data-devices\") pod \"bef8366d-0f45-48bf-a157-71f3a57fb93c\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.349418 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-dispersionconf\") pod \"bef8366d-0f45-48bf-a157-71f3a57fb93c\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.349525 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-combined-ca-bundle\") pod \"bef8366d-0f45-48bf-a157-71f3a57fb93c\" (UID: \"bef8366d-0f45-48bf-a157-71f3a57fb93c\") " Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.349518 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef8366d-0f45-48bf-a157-71f3a57fb93c-scripts" (OuterVolumeSpecName: "scripts") pod "bef8366d-0f45-48bf-a157-71f3a57fb93c" (UID: "bef8366d-0f45-48bf-a157-71f3a57fb93c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.349900 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef8366d-0f45-48bf-a157-71f3a57fb93c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bef8366d-0f45-48bf-a157-71f3a57fb93c" (UID: "bef8366d-0f45-48bf-a157-71f3a57fb93c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.350715 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bef8366d-0f45-48bf-a157-71f3a57fb93c-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.350808 4741 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bef8366d-0f45-48bf-a157-71f3a57fb93c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.350882 4741 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bef8366d-0f45-48bf-a157-71f3a57fb93c-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.356797 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bef8366d-0f45-48bf-a157-71f3a57fb93c" (UID: "bef8366d-0f45-48bf-a157-71f3a57fb93c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.357308 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bef8366d-0f45-48bf-a157-71f3a57fb93c" (UID: "bef8366d-0f45-48bf-a157-71f3a57fb93c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.357879 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bef8366d-0f45-48bf-a157-71f3a57fb93c-kube-api-access-bc7dn" (OuterVolumeSpecName: "kube-api-access-bc7dn") pod "bef8366d-0f45-48bf-a157-71f3a57fb93c" (UID: "bef8366d-0f45-48bf-a157-71f3a57fb93c"). InnerVolumeSpecName "kube-api-access-bc7dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.364320 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bef8366d-0f45-48bf-a157-71f3a57fb93c" (UID: "bef8366d-0f45-48bf-a157-71f3a57fb93c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.453000 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.453056 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc7dn\" (UniqueName: \"kubernetes.io/projected/bef8366d-0f45-48bf-a157-71f3a57fb93c-kube-api-access-bc7dn\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.453072 4741 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.453080 4741 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bef8366d-0f45-48bf-a157-71f3a57fb93c-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 26 08:37:54 crc kubenswrapper[4741]: I0226 08:37:54.835068 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 26 08:37:55 crc kubenswrapper[4741]: I0226 08:37:55.173725 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:55 crc kubenswrapper[4741]: E0226 08:37:55.173992 4741 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 08:37:55 crc kubenswrapper[4741]: E0226 08:37:55.174032 4741 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap 
"swift-ring-files" not found Feb 26 08:37:55 crc kubenswrapper[4741]: E0226 08:37:55.174149 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift podName:91b0231b-fbdf-4714-ac14-d3621c8c7807 nodeName:}" failed. No retries permitted until 2026-02-26 08:37:57.174094505 +0000 UTC m=+1512.170031902 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift") pod "swift-storage-0" (UID: "91b0231b-fbdf-4714-ac14-d3621c8c7807") : configmap "swift-ring-files" not found Feb 26 08:37:55 crc kubenswrapper[4741]: I0226 08:37:55.249920 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mt4js" Feb 26 08:37:55 crc kubenswrapper[4741]: I0226 08:37:55.322294 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-mt4js"] Feb 26 08:37:55 crc kubenswrapper[4741]: I0226 08:37:55.338820 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-mt4js"] Feb 26 08:37:55 crc kubenswrapper[4741]: I0226 08:37:55.802647 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bef8366d-0f45-48bf-a157-71f3a57fb93c" path="/var/lib/kubelet/pods/bef8366d-0f45-48bf-a157-71f3a57fb93c/volumes" Feb 26 08:37:57 crc kubenswrapper[4741]: I0226 08:37:57.228269 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:37:57 crc kubenswrapper[4741]: E0226 08:37:57.228760 4741 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 08:37:57 crc kubenswrapper[4741]: E0226 
08:37:57.228789 4741 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 08:37:57 crc kubenswrapper[4741]: E0226 08:37:57.228884 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift podName:91b0231b-fbdf-4714-ac14-d3621c8c7807 nodeName:}" failed. No retries permitted until 2026-02-26 08:38:01.228855098 +0000 UTC m=+1516.224792495 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift") pod "swift-storage-0" (UID: "91b0231b-fbdf-4714-ac14-d3621c8c7807") : configmap "swift-ring-files" not found Feb 26 08:37:59 crc kubenswrapper[4741]: I0226 08:37:59.289599 4741 generic.go:334] "Generic (PLEG): container finished" podID="2b1496b8-9f14-472d-af02-7357f75ba7cf" containerID="706d4bd250ef9936f1caa0eb3f62e38a8cfd0c8280c0791422a11ccb089c95dc" exitCode=0 Feb 26 08:37:59 crc kubenswrapper[4741]: I0226 08:37:59.289839 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2b1496b8-9f14-472d-af02-7357f75ba7cf","Type":"ContainerDied","Data":"706d4bd250ef9936f1caa0eb3f62e38a8cfd0c8280c0791422a11ccb089c95dc"} Feb 26 08:38:00 crc kubenswrapper[4741]: I0226 08:38:00.157661 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534918-npnn9"] Feb 26 08:38:00 crc kubenswrapper[4741]: I0226 08:38:00.159701 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534918-npnn9" Feb 26 08:38:00 crc kubenswrapper[4741]: I0226 08:38:00.172002 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534918-npnn9"] Feb 26 08:38:00 crc kubenswrapper[4741]: I0226 08:38:00.173658 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:38:00 crc kubenswrapper[4741]: I0226 08:38:00.173713 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:38:00 crc kubenswrapper[4741]: I0226 08:38:00.174000 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:38:00 crc kubenswrapper[4741]: I0226 08:38:00.227369 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnj96\" (UniqueName: \"kubernetes.io/projected/8da47daf-0aba-4cf1-bcd5-585a7b3e2b83-kube-api-access-nnj96\") pod \"auto-csr-approver-29534918-npnn9\" (UID: \"8da47daf-0aba-4cf1-bcd5-585a7b3e2b83\") " pod="openshift-infra/auto-csr-approver-29534918-npnn9" Feb 26 08:38:00 crc kubenswrapper[4741]: I0226 08:38:00.329463 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnj96\" (UniqueName: \"kubernetes.io/projected/8da47daf-0aba-4cf1-bcd5-585a7b3e2b83-kube-api-access-nnj96\") pod \"auto-csr-approver-29534918-npnn9\" (UID: \"8da47daf-0aba-4cf1-bcd5-585a7b3e2b83\") " pod="openshift-infra/auto-csr-approver-29534918-npnn9" Feb 26 08:38:00 crc kubenswrapper[4741]: I0226 08:38:00.354016 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnj96\" (UniqueName: \"kubernetes.io/projected/8da47daf-0aba-4cf1-bcd5-585a7b3e2b83-kube-api-access-nnj96\") pod \"auto-csr-approver-29534918-npnn9\" (UID: \"8da47daf-0aba-4cf1-bcd5-585a7b3e2b83\") " 
pod="openshift-infra/auto-csr-approver-29534918-npnn9" Feb 26 08:38:00 crc kubenswrapper[4741]: I0226 08:38:00.490312 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534918-npnn9" Feb 26 08:38:01 crc kubenswrapper[4741]: W0226 08:38:01.252356 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10d582b9_9e4a_4ce4_8763_addb194c9ced.slice/crio-2f787e9d1507c5d8153135d15797103948646697d069901e6c59705af3005643 WatchSource:0}: Error finding container 2f787e9d1507c5d8153135d15797103948646697d069901e6c59705af3005643: Status 404 returned error can't find the container with id 2f787e9d1507c5d8153135d15797103948646697d069901e6c59705af3005643 Feb 26 08:38:01 crc kubenswrapper[4741]: I0226 08:38:01.254665 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:38:01 crc kubenswrapper[4741]: E0226 08:38:01.254893 4741 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 08:38:01 crc kubenswrapper[4741]: E0226 08:38:01.254917 4741 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 08:38:01 crc kubenswrapper[4741]: E0226 08:38:01.255001 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift podName:91b0231b-fbdf-4714-ac14-d3621c8c7807 nodeName:}" failed. No retries permitted until 2026-02-26 08:38:09.254978715 +0000 UTC m=+1524.250916102 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift") pod "swift-storage-0" (UID: "91b0231b-fbdf-4714-ac14-d3621c8c7807") : configmap "swift-ring-files" not found Feb 26 08:38:01 crc kubenswrapper[4741]: I0226 08:38:01.319732 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"10d582b9-9e4a-4ce4-8763-addb194c9ced","Type":"ContainerStarted","Data":"2f787e9d1507c5d8153135d15797103948646697d069901e6c59705af3005643"} Feb 26 08:38:08 crc kubenswrapper[4741]: I0226 08:38:08.722410 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tqsq4"] Feb 26 08:38:09 crc kubenswrapper[4741]: W0226 08:38:09.001653 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8da47daf_0aba_4cf1_bcd5_585a7b3e2b83.slice/crio-2a19f65446cdcfa90315ea7ad3f0a308dc3915b0e96e92474a9e6a8684b86856 WatchSource:0}: Error finding container 2a19f65446cdcfa90315ea7ad3f0a308dc3915b0e96e92474a9e6a8684b86856: Status 404 returned error can't find the container with id 2a19f65446cdcfa90315ea7ad3f0a308dc3915b0e96e92474a9e6a8684b86856 Feb 26 08:38:09 crc kubenswrapper[4741]: I0226 08:38:09.003825 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534918-npnn9"] Feb 26 08:38:09 crc kubenswrapper[4741]: I0226 08:38:09.015917 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-xwx84"] Feb 26 08:38:09 crc kubenswrapper[4741]: I0226 08:38:09.287110 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:38:09 crc kubenswrapper[4741]: E0226 
08:38:09.287419 4741 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 08:38:09 crc kubenswrapper[4741]: E0226 08:38:09.287635 4741 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 08:38:09 crc kubenswrapper[4741]: E0226 08:38:09.287723 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift podName:91b0231b-fbdf-4714-ac14-d3621c8c7807 nodeName:}" failed. No retries permitted until 2026-02-26 08:38:25.287695417 +0000 UTC m=+1540.283632804 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift") pod "swift-storage-0" (UID: "91b0231b-fbdf-4714-ac14-d3621c8c7807") : configmap "swift-ring-files" not found Feb 26 08:38:09 crc kubenswrapper[4741]: I0226 08:38:09.434950 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534918-npnn9" event={"ID":"8da47daf-0aba-4cf1-bcd5-585a7b3e2b83","Type":"ContainerStarted","Data":"2a19f65446cdcfa90315ea7ad3f0a308dc3915b0e96e92474a9e6a8684b86856"} Feb 26 08:38:09 crc kubenswrapper[4741]: I0226 08:38:09.436738 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tqsq4" event={"ID":"bd73291d-c1d5-4595-89e0-f756eca4ee23","Type":"ContainerStarted","Data":"7452453b63ef4175fe7c0b7aa2980554b58c2a7490ad76bd17aa737528ae68b4"} Feb 26 08:38:09 crc kubenswrapper[4741]: I0226 08:38:09.437850 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xwx84" event={"ID":"548f1177-df4c-4b50-920f-f5b9ff95c283","Type":"ContainerStarted","Data":"5db09ebc684ded5a0d31745fb6382d187774b7210ef84e4ea59e8333717d4d53"} Feb 26 08:38:09 crc kubenswrapper[4741]: I0226 08:38:09.440762 4741 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mjbs" event={"ID":"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba","Type":"ContainerStarted","Data":"c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8"} Feb 26 08:38:09 crc kubenswrapper[4741]: I0226 08:38:09.442926 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2b1496b8-9f14-472d-af02-7357f75ba7cf","Type":"ContainerStarted","Data":"127722805cb693166baa951703dbc5fe3112c007c422c9fbae2fc407b5d53bf0"} Feb 26 08:38:10 crc kubenswrapper[4741]: I0226 08:38:10.457115 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tqsq4" event={"ID":"bd73291d-c1d5-4595-89e0-f756eca4ee23","Type":"ContainerStarted","Data":"8cebb6ecdcb821c3964974b0456d405b8619f9e6a8841ed2eb3f743c5fa01c99"} Feb 26 08:38:10 crc kubenswrapper[4741]: I0226 08:38:10.459966 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" event={"ID":"bae4b643-626b-412e-b8e7-33844ae9610d","Type":"ContainerStarted","Data":"436bbf3c97279880da553ab5b7b81072912816a51cb70b6477c88a69bcb598f2"} Feb 26 08:38:10 crc kubenswrapper[4741]: I0226 08:38:10.461772 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-stlzj" event={"ID":"6a8ae1f8-db05-4bc6-a470-60c58ec57f8c","Type":"ContainerStarted","Data":"bbb64459135dfd54f276458ea1dd52942fa491b65be5b65a35cb393ba73abe65"} Feb 26 08:38:10 crc kubenswrapper[4741]: I0226 08:38:10.463328 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"90779734-9d35-47b9-ac0b-dbf02e3453a5","Type":"ContainerStarted","Data":"f09a6890b605d00eda41015a682a9896e7936efdd60e9de08b6e772fc7494f14"} Feb 26 08:38:10 crc kubenswrapper[4741]: I0226 08:38:10.465469 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h2ft9" 
event={"ID":"4f9888af-7f4e-4ed5-afb4-b13215010297","Type":"ContainerStarted","Data":"1849159ef209894cbbcaf93dfe2d16b2f54e320cbc67ec5f980db7d362b072a7"} Feb 26 08:38:10 crc kubenswrapper[4741]: I0226 08:38:10.506038 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=37.835932745 podStartE2EDuration="2m3.505985704s" podCreationTimestamp="2026-02-26 08:36:07 +0000 UTC" firstStartedPulling="2026-02-26 08:36:10.585961454 +0000 UTC m=+1405.581898841" lastFinishedPulling="2026-02-26 08:37:36.256014413 +0000 UTC m=+1491.251951800" observedRunningTime="2026-02-26 08:38:10.495219369 +0000 UTC m=+1525.491156766" watchObservedRunningTime="2026-02-26 08:38:10.505985704 +0000 UTC m=+1525.501923091" Feb 26 08:38:10 crc kubenswrapper[4741]: I0226 08:38:10.525170 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7mjbs" podStartSLOduration=10.202600493 podStartE2EDuration="38.525146106s" podCreationTimestamp="2026-02-26 08:37:32 +0000 UTC" firstStartedPulling="2026-02-26 08:37:38.939505744 +0000 UTC m=+1493.935443131" lastFinishedPulling="2026-02-26 08:38:07.262051357 +0000 UTC m=+1522.257988744" observedRunningTime="2026-02-26 08:38:10.52424461 +0000 UTC m=+1525.520181997" watchObservedRunningTime="2026-02-26 08:38:10.525146106 +0000 UTC m=+1525.521083493" Feb 26 08:38:11 crc kubenswrapper[4741]: I0226 08:38:11.480267 4741 generic.go:334] "Generic (PLEG): container finished" podID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerID="8cebb6ecdcb821c3964974b0456d405b8619f9e6a8841ed2eb3f743c5fa01c99" exitCode=0 Feb 26 08:38:11 crc kubenswrapper[4741]: I0226 08:38:11.480860 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" podUID="bae4b643-626b-412e-b8e7-33844ae9610d" containerName="dnsmasq-dns" 
containerID="cri-o://436bbf3c97279880da553ab5b7b81072912816a51cb70b6477c88a69bcb598f2" gracePeriod=10 Feb 26 08:38:11 crc kubenswrapper[4741]: I0226 08:38:11.482427 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tqsq4" event={"ID":"bd73291d-c1d5-4595-89e0-f756eca4ee23","Type":"ContainerDied","Data":"8cebb6ecdcb821c3964974b0456d405b8619f9e6a8841ed2eb3f743c5fa01c99"} Feb 26 08:38:11 crc kubenswrapper[4741]: I0226 08:38:11.483234 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:38:11 crc kubenswrapper[4741]: I0226 08:38:11.484101 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-stlzj" Feb 26 08:38:11 crc kubenswrapper[4741]: I0226 08:38:11.485114 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 26 08:38:11 crc kubenswrapper[4741]: I0226 08:38:11.578058 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" podStartSLOduration=32.578031224 podStartE2EDuration="32.578031224s" podCreationTimestamp="2026-02-26 08:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:38:11.554450717 +0000 UTC m=+1526.550388104" watchObservedRunningTime="2026-02-26 08:38:11.578031224 +0000 UTC m=+1526.573968611" Feb 26 08:38:11 crc kubenswrapper[4741]: I0226 08:38:11.580404 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.630195148 podStartE2EDuration="2m0.580390371s" podCreationTimestamp="2026-02-26 08:36:11 +0000 UTC" firstStartedPulling="2026-02-26 08:36:13.378324737 +0000 UTC m=+1408.374262124" lastFinishedPulling="2026-02-26 08:38:08.32851996 +0000 UTC m=+1523.324457347" observedRunningTime="2026-02-26 
08:38:11.5234266 +0000 UTC m=+1526.519363987" watchObservedRunningTime="2026-02-26 08:38:11.580390371 +0000 UTC m=+1526.576327758" Feb 26 08:38:11 crc kubenswrapper[4741]: I0226 08:38:11.743000 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-stlzj" podStartSLOduration=6.707675193 podStartE2EDuration="1m56.742970061s" podCreationTimestamp="2026-02-26 08:36:15 +0000 UTC" firstStartedPulling="2026-02-26 08:36:17.227013156 +0000 UTC m=+1412.222950543" lastFinishedPulling="2026-02-26 08:38:07.262308014 +0000 UTC m=+1522.258245411" observedRunningTime="2026-02-26 08:38:11.679394222 +0000 UTC m=+1526.675331609" watchObservedRunningTime="2026-02-26 08:38:11.742970061 +0000 UTC m=+1526.738907448" Feb 26 08:38:12 crc kubenswrapper[4741]: I0226 08:38:12.499959 4741 generic.go:334] "Generic (PLEG): container finished" podID="26969fe6-2bb9-4f23-8c49-d9d359763da3" containerID="8c616e107c839e4915f447057601903ce753a5701dce65084e24fa53495807e7" exitCode=0 Feb 26 08:38:12 crc kubenswrapper[4741]: I0226 08:38:12.500569 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"26969fe6-2bb9-4f23-8c49-d9d359763da3","Type":"ContainerDied","Data":"8c616e107c839e4915f447057601903ce753a5701dce65084e24fa53495807e7"} Feb 26 08:38:12 crc kubenswrapper[4741]: I0226 08:38:12.511044 4741 generic.go:334] "Generic (PLEG): container finished" podID="4f9888af-7f4e-4ed5-afb4-b13215010297" containerID="1849159ef209894cbbcaf93dfe2d16b2f54e320cbc67ec5f980db7d362b072a7" exitCode=0 Feb 26 08:38:12 crc kubenswrapper[4741]: I0226 08:38:12.511172 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h2ft9" event={"ID":"4f9888af-7f4e-4ed5-afb4-b13215010297","Type":"ContainerDied","Data":"1849159ef209894cbbcaf93dfe2d16b2f54e320cbc67ec5f980db7d362b072a7"} Feb 26 08:38:12 crc kubenswrapper[4741]: I0226 08:38:12.516638 4741 generic.go:334] "Generic (PLEG): container finished" 
podID="815578f6-90b1-4afc-91c7-d24a59a11b23" containerID="6dd5410e6ea19da91c248be084d0673d3300b2ab87b7a41e69fd4beec4aa2e91" exitCode=0 Feb 26 08:38:12 crc kubenswrapper[4741]: I0226 08:38:12.516756 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"815578f6-90b1-4afc-91c7-d24a59a11b23","Type":"ContainerDied","Data":"6dd5410e6ea19da91c248be084d0673d3300b2ab87b7a41e69fd4beec4aa2e91"} Feb 26 08:38:12 crc kubenswrapper[4741]: I0226 08:38:12.523007 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tqsq4" event={"ID":"bd73291d-c1d5-4595-89e0-f756eca4ee23","Type":"ContainerStarted","Data":"289bc94fd6a05120ecf2e29cabec867f1951a2472cfecdab06eb528d0ab5242c"} Feb 26 08:38:12 crc kubenswrapper[4741]: I0226 08:38:12.538339 4741 generic.go:334] "Generic (PLEG): container finished" podID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" containerID="40077b640f4292247aae5f8f0827b3ba522775716ed561f8bb53c5d384790948" exitCode=0 Feb 26 08:38:12 crc kubenswrapper[4741]: I0226 08:38:12.538462 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"403c217b-d3d9-47a3-8a5a-4f6e917edcad","Type":"ContainerDied","Data":"40077b640f4292247aae5f8f0827b3ba522775716ed561f8bb53c5d384790948"} Feb 26 08:38:12 crc kubenswrapper[4741]: I0226 08:38:12.549399 4741 generic.go:334] "Generic (PLEG): container finished" podID="bae4b643-626b-412e-b8e7-33844ae9610d" containerID="436bbf3c97279880da553ab5b7b81072912816a51cb70b6477c88a69bcb598f2" exitCode=0 Feb 26 08:38:12 crc kubenswrapper[4741]: I0226 08:38:12.549588 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" event={"ID":"bae4b643-626b-412e-b8e7-33844ae9610d","Type":"ContainerDied","Data":"436bbf3c97279880da553ab5b7b81072912816a51cb70b6477c88a69bcb598f2"} Feb 26 08:38:12 crc kubenswrapper[4741]: I0226 08:38:12.854776 4741 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:38:12 crc kubenswrapper[4741]: I0226 08:38:12.855285 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.005295 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.125815 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-ovsdbserver-sb\") pod \"bae4b643-626b-412e-b8e7-33844ae9610d\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.125927 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-config\") pod \"bae4b643-626b-412e-b8e7-33844ae9610d\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.126202 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-ovsdbserver-nb\") pod \"bae4b643-626b-412e-b8e7-33844ae9610d\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.126291 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-dns-svc\") pod \"bae4b643-626b-412e-b8e7-33844ae9610d\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.126348 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-npkpm\" (UniqueName: \"kubernetes.io/projected/bae4b643-626b-412e-b8e7-33844ae9610d-kube-api-access-npkpm\") pod \"bae4b643-626b-412e-b8e7-33844ae9610d\" (UID: \"bae4b643-626b-412e-b8e7-33844ae9610d\") " Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.134064 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae4b643-626b-412e-b8e7-33844ae9610d-kube-api-access-npkpm" (OuterVolumeSpecName: "kube-api-access-npkpm") pod "bae4b643-626b-412e-b8e7-33844ae9610d" (UID: "bae4b643-626b-412e-b8e7-33844ae9610d"). InnerVolumeSpecName "kube-api-access-npkpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.184837 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bae4b643-626b-412e-b8e7-33844ae9610d" (UID: "bae4b643-626b-412e-b8e7-33844ae9610d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.190014 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bae4b643-626b-412e-b8e7-33844ae9610d" (UID: "bae4b643-626b-412e-b8e7-33844ae9610d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.191971 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-config" (OuterVolumeSpecName: "config") pod "bae4b643-626b-412e-b8e7-33844ae9610d" (UID: "bae4b643-626b-412e-b8e7-33844ae9610d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.202756 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bae4b643-626b-412e-b8e7-33844ae9610d" (UID: "bae4b643-626b-412e-b8e7-33844ae9610d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.229498 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.229540 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.229558 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.229569 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae4b643-626b-412e-b8e7-33844ae9610d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.229579 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npkpm\" (UniqueName: \"kubernetes.io/projected/bae4b643-626b-412e-b8e7-33844ae9610d-kube-api-access-npkpm\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.566624 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"815578f6-90b1-4afc-91c7-d24a59a11b23","Type":"ContainerStarted","Data":"6b8e3fb0b791313279051abc8340c71d98ee58f7de4fb3e698b10504d659fde1"} Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.570506 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"403c217b-d3d9-47a3-8a5a-4f6e917edcad","Type":"ContainerStarted","Data":"c422c2b3fa19c77819312a9588b4b0471d4e3fdcbfb3ffd1e889f490d908d6df"} Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.573240 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" event={"ID":"bae4b643-626b-412e-b8e7-33844ae9610d","Type":"ContainerDied","Data":"d54fba0c2e69ad6900f8bd7accf15cfa3283f5a168234d1aeac98772772fac2d"} Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.573352 4741 scope.go:117] "RemoveContainer" containerID="436bbf3c97279880da553ab5b7b81072912816a51cb70b6477c88a69bcb598f2" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.573582 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ks9jp" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.587565 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"26969fe6-2bb9-4f23-8c49-d9d359763da3","Type":"ContainerStarted","Data":"4031d6b7bcb81112eca030c5fd4613b59b39e890138fdd3df822f24f13b1f901"} Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.587706 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.633637 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-tqsq4" podStartSLOduration=21.633605141 podStartE2EDuration="21.633605141s" podCreationTimestamp="2026-02-26 08:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:38:13.617238668 +0000 UTC m=+1528.613176065" watchObservedRunningTime="2026-02-26 08:38:13.633605141 +0000 UTC m=+1528.629542528" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.680919 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ks9jp"] Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.699481 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ks9jp"] Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.705937 4741 scope.go:117] "RemoveContainer" containerID="1ff0bb213ea948ef68c5ba3194a26ca0feeeaa65c6d1e9c948a5449dc2d883df" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.804216 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae4b643-626b-412e-b8e7-33844ae9610d" path="/var/lib/kubelet/pods/bae4b643-626b-412e-b8e7-33844ae9610d/volumes" Feb 26 08:38:13 crc kubenswrapper[4741]: I0226 08:38:13.953268 4741 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:38:13 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:38:13 crc kubenswrapper[4741]: > Feb 26 08:38:14 crc kubenswrapper[4741]: I0226 08:38:14.597644 4741 generic.go:334] "Generic (PLEG): container finished" podID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" containerID="10c025245d1919b81b16d3a4063d71f4e69aef927a2e25d87e38b1cb026aa792" exitCode=0 Feb 26 08:38:14 crc kubenswrapper[4741]: I0226 08:38:14.597751 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d20c309e-9b10-446d-a7f7-8aad2bdecfc9","Type":"ContainerDied","Data":"10c025245d1919b81b16d3a4063d71f4e69aef927a2e25d87e38b1cb026aa792"} Feb 26 08:38:14 crc kubenswrapper[4741]: I0226 08:38:14.601685 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h2ft9" event={"ID":"4f9888af-7f4e-4ed5-afb4-b13215010297","Type":"ContainerStarted","Data":"8dd25b96d6ea8c68c67dc5ad61b6c586c387c64595cae687624280cda922d8ef"} Feb 26 08:38:14 crc kubenswrapper[4741]: I0226 08:38:14.674699 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371908.180105 podStartE2EDuration="2m8.674671034s" podCreationTimestamp="2026-02-26 08:36:06 +0000 UTC" firstStartedPulling="2026-02-26 08:36:09.125516437 +0000 UTC m=+1404.121453824" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:38:14.663269462 +0000 UTC m=+1529.659206869" watchObservedRunningTime="2026-02-26 08:38:14.674671034 +0000 UTC m=+1529.670608421" Feb 26 08:38:14 crc kubenswrapper[4741]: I0226 08:38:14.711530 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.9304782 
podStartE2EDuration="2m9.711498916s" podCreationTimestamp="2026-02-26 08:36:05 +0000 UTC" firstStartedPulling="2026-02-26 08:36:08.598526539 +0000 UTC m=+1403.594463926" lastFinishedPulling="2026-02-26 08:37:34.379547255 +0000 UTC m=+1489.375484642" observedRunningTime="2026-02-26 08:38:14.694021122 +0000 UTC m=+1529.689958519" watchObservedRunningTime="2026-02-26 08:38:14.711498916 +0000 UTC m=+1529.707436303" Feb 26 08:38:14 crc kubenswrapper[4741]: I0226 08:38:14.744364 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=43.31149733 podStartE2EDuration="2m9.744340465s" podCreationTimestamp="2026-02-26 08:36:05 +0000 UTC" firstStartedPulling="2026-02-26 08:36:08.683502615 +0000 UTC m=+1403.679440002" lastFinishedPulling="2026-02-26 08:37:35.11634574 +0000 UTC m=+1490.112283137" observedRunningTime="2026-02-26 08:38:14.729230068 +0000 UTC m=+1529.725167465" watchObservedRunningTime="2026-02-26 08:38:14.744340465 +0000 UTC m=+1529.740277862" Feb 26 08:38:17 crc kubenswrapper[4741]: I0226 08:38:17.597329 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:38:17 crc kubenswrapper[4741]: I0226 08:38:17.676390 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 26 08:38:17 crc kubenswrapper[4741]: I0226 08:38:17.722237 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 26 08:38:17 crc kubenswrapper[4741]: I0226 08:38:17.736745 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-sz6p9"] Feb 26 08:38:17 crc kubenswrapper[4741]: I0226 08:38:17.737006 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" podUID="8004c8c7-3187-4400-bd77-4e7cd0c3dd71" containerName="dnsmasq-dns" 
containerID="cri-o://53bb994d615ae6b4258c96569adfe6f2e43c52ef82c88c4d4b74eb091e6f92ae" gracePeriod=10 Feb 26 08:38:18 crc kubenswrapper[4741]: I0226 08:38:18.295623 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:38:18 crc kubenswrapper[4741]: I0226 08:38:18.724917 4741 generic.go:334] "Generic (PLEG): container finished" podID="8004c8c7-3187-4400-bd77-4e7cd0c3dd71" containerID="53bb994d615ae6b4258c96569adfe6f2e43c52ef82c88c4d4b74eb091e6f92ae" exitCode=0 Feb 26 08:38:18 crc kubenswrapper[4741]: I0226 08:38:18.724977 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" event={"ID":"8004c8c7-3187-4400-bd77-4e7cd0c3dd71","Type":"ContainerDied","Data":"53bb994d615ae6b4258c96569adfe6f2e43c52ef82c88c4d4b74eb091e6f92ae"} Feb 26 08:38:19 crc kubenswrapper[4741]: I0226 08:38:19.598282 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 26 08:38:19 crc kubenswrapper[4741]: I0226 08:38:19.598361 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 26 08:38:20 crc kubenswrapper[4741]: I0226 08:38:20.518594 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" podUID="8004c8c7-3187-4400-bd77-4e7cd0c3dd71" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Feb 26 08:38:22 crc kubenswrapper[4741]: I0226 08:38:22.236154 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 26 08:38:23 crc kubenswrapper[4741]: I0226 08:38:23.917759 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:38:23 crc 
kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:38:23 crc kubenswrapper[4741]: > Feb 26 08:38:25 crc kubenswrapper[4741]: I0226 08:38:25.374997 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:38:25 crc kubenswrapper[4741]: E0226 08:38:25.375669 4741 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 08:38:25 crc kubenswrapper[4741]: E0226 08:38:25.375687 4741 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 08:38:25 crc kubenswrapper[4741]: E0226 08:38:25.375737 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift podName:91b0231b-fbdf-4714-ac14-d3621c8c7807 nodeName:}" failed. No retries permitted until 2026-02-26 08:38:57.37572206 +0000 UTC m=+1572.371659447 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift") pod "swift-storage-0" (UID: "91b0231b-fbdf-4714-ac14-d3621c8c7807") : configmap "swift-ring-files" not found Feb 26 08:38:25 crc kubenswrapper[4741]: I0226 08:38:25.518344 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" podUID="8004c8c7-3187-4400-bd77-4e7cd0c3dd71" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Feb 26 08:38:27 crc kubenswrapper[4741]: E0226 08:38:27.586038 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741" Feb 26 08:38:27 crc kubenswrapper[4741]: E0226 08:38:27.587529 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus 
--web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzrgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(1e870a59-585e-4369-88d5-644e5034ad33): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 08:38:27 crc kubenswrapper[4741]: I0226 08:38:27.669880 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="26969fe6-2bb9-4f23-8c49-d9d359763da3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Feb 26 08:38:27 crc 
kubenswrapper[4741]: I0226 08:38:27.737460 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="815578f6-90b1-4afc-91c7-d24a59a11b23" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.171983 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.267333 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.301504 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.138:5671: connect: connection refused" Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.302169 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="2b1496b8-9f14-472d-af02-7357f75ba7cf" containerName="galera" probeResult="failure" output=< Feb 26 08:38:28 crc kubenswrapper[4741]: wsrep_local_state_comment (Joined) differs from Synced Feb 26 08:38:28 crc kubenswrapper[4741]: > Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.356858 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lt4b\" (UniqueName: \"kubernetes.io/projected/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-kube-api-access-4lt4b\") pod \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.357031 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-config\") pod 
\"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.357151 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-dns-svc\") pod \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.357168 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-ovsdbserver-nb\") pod \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\" (UID: \"8004c8c7-3187-4400-bd77-4e7cd0c3dd71\") " Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.372435 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-kube-api-access-4lt4b" (OuterVolumeSpecName: "kube-api-access-4lt4b") pod "8004c8c7-3187-4400-bd77-4e7cd0c3dd71" (UID: "8004c8c7-3187-4400-bd77-4e7cd0c3dd71"). InnerVolumeSpecName "kube-api-access-4lt4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:38:28 crc kubenswrapper[4741]: E0226 08:38:28.407593 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified" Feb 26 08:38:28 crc kubenswrapper[4741]: E0226 08:38:28.407819 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-northd,Image:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,Command:[/usr/bin/ovn-northd],Args:[-vfile:off -vconsole:info --n-threads=1 --ovnnb-db=ssl:ovsdbserver-nb-0.openstack.svc.cluster.local:6641 --ovnsb-db=ssl:ovsdbserver-sb-0.openstack.svc.cluster.local:6642 --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffhc4hf9h57h685h688hd7h695h54fh585h64dh686h578h65h68dh589h7dh5b4h5d4h647h64bh5d9h56ch559h5c8h5dch68h55bh677h547h64bh67bq,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:certs,Value:n57dhf9h56bh6ch557h5cfh67h97h597h65fh556h55bhbch9bh578h8bh666h8bh64dh8ch9dh6dh5cfh654h565h59h656h5dchbch554h5f5h5b8q,ValueFrom:nil,},EnvVar{Name:certs_metrics,Value:n68fh5fch675h5ddh585h65dhc7h8fh5b4h677h5c5h65fhf9h84h68fh695h5f7h558h66dh87hb5h676h69h86h644h87h55fh74hdfh5d8h5bfh6dq,ValueFrom:nil,},EnvVar{Name:ovnnorthd-config,Value:n5c8h7ch56bh8dh8hc4h5dch9dh68h6bhb7h598h549h5dbh66fh6bh5b4h5cch5d6h55ch57fhfch588h89h5ddh5d6h65bh65bh8dhc4h67dh569q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-scripts,Value:n664hd8h66ch58dh64hc9h66bhd4h558h697h67bh557hdch664h567h669h555h696h556h556h5fh5bh569hbh665h9dh4h9bh564hc8h5b7h5c4q,ValueFrom:nil,},EnvVar{Name:tls-ca-bundle.pem,Value:n668h589hd5h656h64fh54fh545h66ch99h89h5c6h556h68bh59ch5b8h569h65h5b8h7dh5bfh78h67dhdhdfhc4hd5hfh589h575hch7bh59bq,ValueFrom:nil,},},Resources:
ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndkps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities
{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-northd-0_openstack(10d582b9-9e4a-4ce4-8763-addb194c9ced): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.463395 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lt4b\" (UniqueName: \"kubernetes.io/projected/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-kube-api-access-4lt4b\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.485391 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-config" (OuterVolumeSpecName: "config") pod "8004c8c7-3187-4400-bd77-4e7cd0c3dd71" (UID: "8004c8c7-3187-4400-bd77-4e7cd0c3dd71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.520714 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8004c8c7-3187-4400-bd77-4e7cd0c3dd71" (UID: "8004c8c7-3187-4400-bd77-4e7cd0c3dd71"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.549488 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8004c8c7-3187-4400-bd77-4e7cd0c3dd71" (UID: "8004c8c7-3187-4400-bd77-4e7cd0c3dd71"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.565820 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.565860 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.565869 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8004c8c7-3187-4400-bd77-4e7cd0c3dd71-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:28 crc kubenswrapper[4741]: E0226 08:38:28.807858 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:0bc733754f4824dbae9e2cabcc75cca37f0ce17d723bd0b11a9c50b6c4183c6c: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-swift-proxy-server/blobs/sha256:0bc733754f4824dbae9e2cabcc75cca37f0ce17d723bd0b11a9c50b6c4183c6c\": context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified" Feb 26 08:38:28 crc kubenswrapper[4741]: E0226 08:38:28.808588 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:swift-ring-rebalance,Image:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,Command:[/usr/local/bin/swift-ring-tool all],Args:[],WorkingDir:/etc/swift,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CM_NAME,Value:swift-ring-files,ValueFrom:nil,},EnvVar{Name:NAMESPACE,Value:openstack,ValueFrom:nil,},EnvVar{Name:OWNER_APIVERSION,Value:swift.openstack.org/v1beta1,ValueFrom:nil,},EnvVar{Name:OWNER_KIND,Value:SwiftRing,ValueFrom:nil,},EnvVar{Name:OWNER_NAME,Value:swift-ring,ValueFrom:nil,},EnvVar{Name:OWNER_UID,Value:e8809605-7603-4d84-b522-211b9ab6758a,ValueFrom:nil,},EnvVar{Name:SWIFT_MIN_PART_HOURS,Value:1,ValueFrom:nil,},EnvVar{Name:SWIFT_PART_POWER,Value:10,ValueFrom:nil,},EnvVar{Name:SWIFT_REPLICAS,Value:1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/swift-ring-tool,SubPath:swift-ring-tool,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:swiftconf,ReadOnly:true,MountPath:/etc/swift/swift.conf,SubPath:swift.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ring-data-devices,ReadOnly:true,MountPath:/var/lib/config-data/ring-devices,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dispersionconf,ReadOnly:true,MountPath:/etc/swift/dispersion.conf,SubPath:dispersion.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75pnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-ring-rebalance-xwx84_openstack(548f1177-df4c-4b50-920f-f5b9ff95c283): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:0bc733754f4824dbae9e2cabcc75cca37f0ce17d723bd0b11a9c50b6c4183c6c: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-swift-proxy-server/blobs/sha256:0bc733754f4824dbae9e2cabcc75cca37f0ce17d723bd0b11a9c50b6c4183c6c\": context canceled" logger="UnhandledError" Feb 26 08:38:28 crc kubenswrapper[4741]: E0226 08:38:28.810036 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:0bc733754f4824dbae9e2cabcc75cca37f0ce17d723bd0b11a9c50b6c4183c6c: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-swift-proxy-server/blobs/sha256:0bc733754f4824dbae9e2cabcc75cca37f0ce17d723bd0b11a9c50b6c4183c6c\\\": context canceled\"" pod="openstack/swift-ring-rebalance-xwx84" podUID="548f1177-df4c-4b50-920f-f5b9ff95c283" Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.925724 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.927385 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-sz6p9" event={"ID":"8004c8c7-3187-4400-bd77-4e7cd0c3dd71","Type":"ContainerDied","Data":"93a986b2d44410faf9ea8cdf266d4aec16c833f2dbac06064a60a43767e61bf6"} Feb 26 08:38:28 crc kubenswrapper[4741]: I0226 08:38:28.927495 4741 scope.go:117] "RemoveContainer" containerID="53bb994d615ae6b4258c96569adfe6f2e43c52ef82c88c4d4b74eb091e6f92ae" Feb 26 08:38:28 crc kubenswrapper[4741]: E0226 08:38:28.934140 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified\\\"\"" pod="openstack/swift-ring-rebalance-xwx84" podUID="548f1177-df4c-4b50-920f-f5b9ff95c283" Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.047191 4741 scope.go:117] "RemoveContainer" containerID="2b4202bdab92e0cd0a5aace3903a8ed81e412312c1b164e763aa35f3146f1b3f" Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.144225 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-sz6p9"] Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.181906 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-sz6p9"] Feb 26 08:38:29 crc kubenswrapper[4741]: E0226 08:38:29.279427 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-northd-0" podUID="10d582b9-9e4a-4ce4-8763-addb194c9ced" Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.722923 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 
26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.810702 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8004c8c7-3187-4400-bd77-4e7cd0c3dd71" path="/var/lib/kubelet/pods/8004c8c7-3187-4400-bd77-4e7cd0c3dd71/volumes" Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.939354 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d20c309e-9b10-446d-a7f7-8aad2bdecfc9","Type":"ContainerStarted","Data":"5845e157fcbb2c36269cfc9d78efb2946b8754682d3f6c5e9cee686f53cd9f11"} Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.939758 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.943281 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"10d582b9-9e4a-4ce4-8763-addb194c9ced","Type":"ContainerStarted","Data":"33dbf9dbc3ae55dc476779505d77121682a716f56872c660c4c80f61ba6fe654"} Feb 26 08:38:29 crc kubenswrapper[4741]: E0226 08:38:29.945973 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="10d582b9-9e4a-4ce4-8763-addb194c9ced" Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.947402 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h2ft9" event={"ID":"4f9888af-7f4e-4ed5-afb4-b13215010297","Type":"ContainerStarted","Data":"3c6a07a07e5b00e3beb1102eea3972511bb76fbbfec83bc6e034956cff7d881a"} Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.947484 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.947568 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.948985 4741 generic.go:334] "Generic (PLEG): container finished" podID="8da47daf-0aba-4cf1-bcd5-585a7b3e2b83" containerID="4f564089c14874fe96bd1bc26baaff5abf64f083fde542dd389dcfa57693b7ea" exitCode=0 Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.949045 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534918-npnn9" event={"ID":"8da47daf-0aba-4cf1-bcd5-585a7b3e2b83","Type":"ContainerDied","Data":"4f564089c14874fe96bd1bc26baaff5abf64f083fde542dd389dcfa57693b7ea"} Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.981197 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=-9223371891.873608 podStartE2EDuration="2m24.981168597s" podCreationTimestamp="2026-02-26 08:36:05 +0000 UTC" firstStartedPulling="2026-02-26 08:36:08.839547061 +0000 UTC m=+1403.835484438" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:38:29.971851514 +0000 UTC m=+1544.967788901" watchObservedRunningTime="2026-02-26 08:38:29.981168597 +0000 UTC m=+1544.977105984" Feb 26 08:38:29 crc kubenswrapper[4741]: I0226 08:38:29.997949 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-h2ft9" podStartSLOduration=34.474166457 podStartE2EDuration="2m13.997929621s" podCreationTimestamp="2026-02-26 08:36:16 +0000 UTC" firstStartedPulling="2026-02-26 08:36:28.651084228 +0000 UTC m=+1423.647021615" lastFinishedPulling="2026-02-26 08:38:08.174847392 +0000 UTC m=+1523.170784779" observedRunningTime="2026-02-26 08:38:29.994961067 +0000 UTC m=+1544.990898444" watchObservedRunningTime="2026-02-26 08:38:29.997929621 +0000 UTC m=+1544.993867008" Feb 26 08:38:30 crc kubenswrapper[4741]: E0226 08:38:30.963003 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="10d582b9-9e4a-4ce4-8763-addb194c9ced" Feb 26 08:38:31 crc kubenswrapper[4741]: I0226 08:38:31.374195 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534918-npnn9" Feb 26 08:38:31 crc kubenswrapper[4741]: I0226 08:38:31.480176 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnj96\" (UniqueName: \"kubernetes.io/projected/8da47daf-0aba-4cf1-bcd5-585a7b3e2b83-kube-api-access-nnj96\") pod \"8da47daf-0aba-4cf1-bcd5-585a7b3e2b83\" (UID: \"8da47daf-0aba-4cf1-bcd5-585a7b3e2b83\") " Feb 26 08:38:31 crc kubenswrapper[4741]: I0226 08:38:31.495690 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da47daf-0aba-4cf1-bcd5-585a7b3e2b83-kube-api-access-nnj96" (OuterVolumeSpecName: "kube-api-access-nnj96") pod "8da47daf-0aba-4cf1-bcd5-585a7b3e2b83" (UID: "8da47daf-0aba-4cf1-bcd5-585a7b3e2b83"). InnerVolumeSpecName "kube-api-access-nnj96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:38:31 crc kubenswrapper[4741]: I0226 08:38:31.584359 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnj96\" (UniqueName: \"kubernetes.io/projected/8da47daf-0aba-4cf1-bcd5-585a7b3e2b83-kube-api-access-nnj96\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:31 crc kubenswrapper[4741]: I0226 08:38:31.972625 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534918-npnn9" event={"ID":"8da47daf-0aba-4cf1-bcd5-585a7b3e2b83","Type":"ContainerDied","Data":"2a19f65446cdcfa90315ea7ad3f0a308dc3915b0e96e92474a9e6a8684b86856"} Feb 26 08:38:31 crc kubenswrapper[4741]: I0226 08:38:31.972676 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534918-npnn9" Feb 26 08:38:31 crc kubenswrapper[4741]: I0226 08:38:31.972688 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a19f65446cdcfa90315ea7ad3f0a308dc3915b0e96e92474a9e6a8684b86856" Feb 26 08:38:32 crc kubenswrapper[4741]: I0226 08:38:32.562280 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534912-tnzrt"] Feb 26 08:38:32 crc kubenswrapper[4741]: I0226 08:38:32.597357 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534912-tnzrt"] Feb 26 08:38:32 crc kubenswrapper[4741]: I0226 08:38:32.983888 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e870a59-585e-4369-88d5-644e5034ad33","Type":"ContainerStarted","Data":"8adfed2f7411ea18b35d70d9f92b66b3dd54ef704f1eafcbdcbcd1fd19d7544a"} Feb 26 08:38:33 crc kubenswrapper[4741]: I0226 08:38:33.801055 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1907a87-1ff8-4d9b-a81e-05f26f468a4d" path="/var/lib/kubelet/pods/e1907a87-1ff8-4d9b-a81e-05f26f468a4d/volumes" Feb 26 08:38:33 crc kubenswrapper[4741]: I0226 08:38:33.910149 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:38:33 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:38:33 crc kubenswrapper[4741]: > Feb 26 08:38:35 crc kubenswrapper[4741]: I0226 08:38:35.012225 4741 generic.go:334] "Generic (PLEG): container finished" podID="ed8ae863-261b-4cbd-945a-b79c99fa0a9f" containerID="03b0841a99ba4b0fd9f597780550f670b4313a79053b26d4168b8b0f27c79411" exitCode=0 Feb 26 08:38:35 crc kubenswrapper[4741]: I0226 08:38:35.012370 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"ed8ae863-261b-4cbd-945a-b79c99fa0a9f","Type":"ContainerDied","Data":"03b0841a99ba4b0fd9f597780550f670b4313a79053b26d4168b8b0f27c79411"} Feb 26 08:38:37 crc kubenswrapper[4741]: I0226 08:38:37.669414 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 26 08:38:37 crc kubenswrapper[4741]: I0226 08:38:37.723840 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="815578f6-90b1-4afc-91c7-d24a59a11b23" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.106018 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tmkws"] Feb 26 08:38:38 crc kubenswrapper[4741]: E0226 08:38:38.107102 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8004c8c7-3187-4400-bd77-4e7cd0c3dd71" containerName="init" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.107138 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8004c8c7-3187-4400-bd77-4e7cd0c3dd71" containerName="init" Feb 26 08:38:38 crc kubenswrapper[4741]: E0226 08:38:38.107153 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8004c8c7-3187-4400-bd77-4e7cd0c3dd71" containerName="dnsmasq-dns" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.107159 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8004c8c7-3187-4400-bd77-4e7cd0c3dd71" containerName="dnsmasq-dns" Feb 26 08:38:38 crc kubenswrapper[4741]: E0226 08:38:38.107184 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da47daf-0aba-4cf1-bcd5-585a7b3e2b83" containerName="oc" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.107192 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da47daf-0aba-4cf1-bcd5-585a7b3e2b83" containerName="oc" Feb 26 08:38:38 crc 
kubenswrapper[4741]: E0226 08:38:38.107214 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae4b643-626b-412e-b8e7-33844ae9610d" containerName="dnsmasq-dns" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.107221 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae4b643-626b-412e-b8e7-33844ae9610d" containerName="dnsmasq-dns" Feb 26 08:38:38 crc kubenswrapper[4741]: E0226 08:38:38.107239 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae4b643-626b-412e-b8e7-33844ae9610d" containerName="init" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.107248 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae4b643-626b-412e-b8e7-33844ae9610d" containerName="init" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.107498 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da47daf-0aba-4cf1-bcd5-585a7b3e2b83" containerName="oc" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.107510 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="8004c8c7-3187-4400-bd77-4e7cd0c3dd71" containerName="dnsmasq-dns" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.107525 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae4b643-626b-412e-b8e7-33844ae9610d" containerName="dnsmasq-dns" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.109714 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tmkws" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.115162 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.122009 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tmkws"] Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.204319 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d62a09-a551-4192-8c8b-23f6eb9d1af6-operator-scripts\") pod \"root-account-create-update-tmkws\" (UID: \"a5d62a09-a551-4192-8c8b-23f6eb9d1af6\") " pod="openstack/root-account-create-update-tmkws" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.204505 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t6r6\" (UniqueName: \"kubernetes.io/projected/a5d62a09-a551-4192-8c8b-23f6eb9d1af6-kube-api-access-2t6r6\") pod \"root-account-create-update-tmkws\" (UID: \"a5d62a09-a551-4192-8c8b-23f6eb9d1af6\") " pod="openstack/root-account-create-update-tmkws" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.300253 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.306593 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d62a09-a551-4192-8c8b-23f6eb9d1af6-operator-scripts\") pod \"root-account-create-update-tmkws\" (UID: \"a5d62a09-a551-4192-8c8b-23f6eb9d1af6\") " pod="openstack/root-account-create-update-tmkws" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.307409 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d62a09-a551-4192-8c8b-23f6eb9d1af6-operator-scripts\") pod \"root-account-create-update-tmkws\" (UID: \"a5d62a09-a551-4192-8c8b-23f6eb9d1af6\") " pod="openstack/root-account-create-update-tmkws" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.307572 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t6r6\" (UniqueName: \"kubernetes.io/projected/a5d62a09-a551-4192-8c8b-23f6eb9d1af6-kube-api-access-2t6r6\") pod \"root-account-create-update-tmkws\" (UID: \"a5d62a09-a551-4192-8c8b-23f6eb9d1af6\") " pod="openstack/root-account-create-update-tmkws" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.329505 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t6r6\" (UniqueName: \"kubernetes.io/projected/a5d62a09-a551-4192-8c8b-23f6eb9d1af6-kube-api-access-2t6r6\") pod \"root-account-create-update-tmkws\" (UID: \"a5d62a09-a551-4192-8c8b-23f6eb9d1af6\") " pod="openstack/root-account-create-update-tmkws" Feb 26 08:38:38 crc kubenswrapper[4741]: I0226 08:38:38.525663 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tmkws" Feb 26 08:38:39 crc kubenswrapper[4741]: I0226 08:38:39.029916 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tmkws"] Feb 26 08:38:39 crc kubenswrapper[4741]: W0226 08:38:39.037082 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5d62a09_a551_4192_8c8b_23f6eb9d1af6.slice/crio-0df19a68968d424cd84bd78c4311adb4540e6b2fd85f7f6ac179938dde7568a4 WatchSource:0}: Error finding container 0df19a68968d424cd84bd78c4311adb4540e6b2fd85f7f6ac179938dde7568a4: Status 404 returned error can't find the container with id 0df19a68968d424cd84bd78c4311adb4540e6b2fd85f7f6ac179938dde7568a4 Feb 26 08:38:39 crc kubenswrapper[4741]: I0226 08:38:39.070649 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tmkws" event={"ID":"a5d62a09-a551-4192-8c8b-23f6eb9d1af6","Type":"ContainerStarted","Data":"0df19a68968d424cd84bd78c4311adb4540e6b2fd85f7f6ac179938dde7568a4"} Feb 26 08:38:39 crc kubenswrapper[4741]: I0226 08:38:39.073460 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ed8ae863-261b-4cbd-945a-b79c99fa0a9f","Type":"ContainerStarted","Data":"1175ad5a378fde9ce8ea2b88685227c351fe8b0bea0b8a6e5f90bbe8fac5c3f2"} Feb 26 08:38:39 crc kubenswrapper[4741]: I0226 08:38:39.103498 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371883.751303 podStartE2EDuration="2m33.103472195s" podCreationTimestamp="2026-02-26 08:36:06 +0000 UTC" firstStartedPulling="2026-02-26 08:36:08.84233887 +0000 UTC m=+1403.838276257" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:38:39.097249339 +0000 UTC m=+1554.093186746" watchObservedRunningTime="2026-02-26 08:38:39.103472195 +0000 UTC 
m=+1554.099409572" Feb 26 08:38:40 crc kubenswrapper[4741]: I0226 08:38:40.084143 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tmkws" event={"ID":"a5d62a09-a551-4192-8c8b-23f6eb9d1af6","Type":"ContainerStarted","Data":"10ef8d5ce0856ab52a2b216b991f207ecd121aa918160cd4cb37863bdec80621"} Feb 26 08:38:40 crc kubenswrapper[4741]: E0226 08:38:40.336106 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="1e870a59-585e-4369-88d5-644e5034ad33" Feb 26 08:38:41 crc kubenswrapper[4741]: I0226 08:38:41.096950 4741 generic.go:334] "Generic (PLEG): container finished" podID="a5d62a09-a551-4192-8c8b-23f6eb9d1af6" containerID="10ef8d5ce0856ab52a2b216b991f207ecd121aa918160cd4cb37863bdec80621" exitCode=0 Feb 26 08:38:41 crc kubenswrapper[4741]: I0226 08:38:41.096987 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tmkws" event={"ID":"a5d62a09-a551-4192-8c8b-23f6eb9d1af6","Type":"ContainerDied","Data":"10ef8d5ce0856ab52a2b216b991f207ecd121aa918160cd4cb37863bdec80621"} Feb 26 08:38:41 crc kubenswrapper[4741]: I0226 08:38:41.102448 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e870a59-585e-4369-88d5-644e5034ad33","Type":"ContainerStarted","Data":"3f93c9aa49e680f892522c057c055824a8a7297e606c780848f44b7441d7f14b"} Feb 26 08:38:41 crc kubenswrapper[4741]: I0226 08:38:41.402783 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-stlzj" podUID="6a8ae1f8-db05-4bc6-a470-60c58ec57f8c" containerName="ovn-controller" probeResult="failure" output=< Feb 26 08:38:41 crc kubenswrapper[4741]: ERROR - ovn-controller connection status is 'not connected', expecting 
'connected' status Feb 26 08:38:41 crc kubenswrapper[4741]: > Feb 26 08:38:42 crc kubenswrapper[4741]: I0226 08:38:42.623477 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tmkws" Feb 26 08:38:42 crc kubenswrapper[4741]: I0226 08:38:42.734014 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t6r6\" (UniqueName: \"kubernetes.io/projected/a5d62a09-a551-4192-8c8b-23f6eb9d1af6-kube-api-access-2t6r6\") pod \"a5d62a09-a551-4192-8c8b-23f6eb9d1af6\" (UID: \"a5d62a09-a551-4192-8c8b-23f6eb9d1af6\") " Feb 26 08:38:42 crc kubenswrapper[4741]: I0226 08:38:42.734458 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d62a09-a551-4192-8c8b-23f6eb9d1af6-operator-scripts\") pod \"a5d62a09-a551-4192-8c8b-23f6eb9d1af6\" (UID: \"a5d62a09-a551-4192-8c8b-23f6eb9d1af6\") " Feb 26 08:38:42 crc kubenswrapper[4741]: I0226 08:38:42.745053 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d62a09-a551-4192-8c8b-23f6eb9d1af6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5d62a09-a551-4192-8c8b-23f6eb9d1af6" (UID: "a5d62a09-a551-4192-8c8b-23f6eb9d1af6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:38:42 crc kubenswrapper[4741]: I0226 08:38:42.809983 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d62a09-a551-4192-8c8b-23f6eb9d1af6-kube-api-access-2t6r6" (OuterVolumeSpecName: "kube-api-access-2t6r6") pod "a5d62a09-a551-4192-8c8b-23f6eb9d1af6" (UID: "a5d62a09-a551-4192-8c8b-23f6eb9d1af6"). InnerVolumeSpecName "kube-api-access-2t6r6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:38:42 crc kubenswrapper[4741]: I0226 08:38:42.843553 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d62a09-a551-4192-8c8b-23f6eb9d1af6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:42 crc kubenswrapper[4741]: I0226 08:38:42.844136 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t6r6\" (UniqueName: \"kubernetes.io/projected/a5d62a09-a551-4192-8c8b-23f6eb9d1af6-kube-api-access-2t6r6\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:43 crc kubenswrapper[4741]: I0226 08:38:43.130263 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e870a59-585e-4369-88d5-644e5034ad33","Type":"ContainerStarted","Data":"4e9bf0714e211d7a0ebc64610516c1b5da0890e119122e83377f545f842d7613"} Feb 26 08:38:43 crc kubenswrapper[4741]: I0226 08:38:43.134260 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tmkws" event={"ID":"a5d62a09-a551-4192-8c8b-23f6eb9d1af6","Type":"ContainerDied","Data":"0df19a68968d424cd84bd78c4311adb4540e6b2fd85f7f6ac179938dde7568a4"} Feb 26 08:38:43 crc kubenswrapper[4741]: I0226 08:38:43.134319 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0df19a68968d424cd84bd78c4311adb4540e6b2fd85f7f6ac179938dde7568a4" Feb 26 08:38:43 crc kubenswrapper[4741]: I0226 08:38:43.134282 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tmkws" Feb 26 08:38:43 crc kubenswrapper[4741]: I0226 08:38:43.170156 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=4.683285741 podStartE2EDuration="2m32.170133811s" podCreationTimestamp="2026-02-26 08:36:11 +0000 UTC" firstStartedPulling="2026-02-26 08:36:14.932472746 +0000 UTC m=+1409.928410133" lastFinishedPulling="2026-02-26 08:38:42.419320816 +0000 UTC m=+1557.415258203" observedRunningTime="2026-02-26 08:38:43.169802731 +0000 UTC m=+1558.165740128" watchObservedRunningTime="2026-02-26 08:38:43.170133811 +0000 UTC m=+1558.166071198" Feb 26 08:38:43 crc kubenswrapper[4741]: I0226 08:38:43.806285 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:43 crc kubenswrapper[4741]: I0226 08:38:43.806353 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:43 crc kubenswrapper[4741]: I0226 08:38:43.806484 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:43 crc kubenswrapper[4741]: I0226 08:38:43.938844 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:38:43 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:38:43 crc kubenswrapper[4741]: > Feb 26 08:38:44 crc kubenswrapper[4741]: I0226 08:38:44.145467 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:46 crc kubenswrapper[4741]: I0226 08:38:46.348804 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Feb 26 08:38:46 crc kubenswrapper[4741]: I0226 08:38:46.430869 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-stlzj" podUID="6a8ae1f8-db05-4bc6-a470-60c58ec57f8c" containerName="ovn-controller" probeResult="failure" output=< Feb 26 08:38:46 crc kubenswrapper[4741]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 26 08:38:46 crc kubenswrapper[4741]: > Feb 26 08:38:46 crc kubenswrapper[4741]: I0226 08:38:46.551041 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:38:47 crc kubenswrapper[4741]: I0226 08:38:47.175945 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="config-reloader" containerID="cri-o://8adfed2f7411ea18b35d70d9f92b66b3dd54ef704f1eafcbdcbcd1fd19d7544a" gracePeriod=600 Feb 26 08:38:47 crc kubenswrapper[4741]: I0226 08:38:47.176034 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="prometheus" containerID="cri-o://4e9bf0714e211d7a0ebc64610516c1b5da0890e119122e83377f545f842d7613" gracePeriod=600 Feb 26 08:38:47 crc kubenswrapper[4741]: I0226 08:38:47.176042 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="thanos-sidecar" containerID="cri-o://3f93c9aa49e680f892522c057c055824a8a7297e606c780848f44b7441d7f14b" gracePeriod=600 Feb 26 08:38:47 crc kubenswrapper[4741]: I0226 08:38:47.725321 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 26 08:38:47 crc kubenswrapper[4741]: I0226 08:38:47.785135 4741 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.137:5671: connect: connection refused" Feb 26 08:38:48 crc kubenswrapper[4741]: I0226 08:38:48.031412 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 26 08:38:48 crc kubenswrapper[4741]: I0226 08:38:48.031490 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 26 08:38:48 crc kubenswrapper[4741]: I0226 08:38:48.197555 4741 generic.go:334] "Generic (PLEG): container finished" podID="1e870a59-585e-4369-88d5-644e5034ad33" containerID="4e9bf0714e211d7a0ebc64610516c1b5da0890e119122e83377f545f842d7613" exitCode=0 Feb 26 08:38:48 crc kubenswrapper[4741]: I0226 08:38:48.198039 4741 generic.go:334] "Generic (PLEG): container finished" podID="1e870a59-585e-4369-88d5-644e5034ad33" containerID="3f93c9aa49e680f892522c057c055824a8a7297e606c780848f44b7441d7f14b" exitCode=0 Feb 26 08:38:48 crc kubenswrapper[4741]: I0226 08:38:48.198053 4741 generic.go:334] "Generic (PLEG): container finished" podID="1e870a59-585e-4369-88d5-644e5034ad33" containerID="8adfed2f7411ea18b35d70d9f92b66b3dd54ef704f1eafcbdcbcd1fd19d7544a" exitCode=0 Feb 26 08:38:48 crc kubenswrapper[4741]: I0226 08:38:48.198084 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e870a59-585e-4369-88d5-644e5034ad33","Type":"ContainerDied","Data":"4e9bf0714e211d7a0ebc64610516c1b5da0890e119122e83377f545f842d7613"} Feb 26 08:38:48 crc kubenswrapper[4741]: I0226 08:38:48.198165 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e870a59-585e-4369-88d5-644e5034ad33","Type":"ContainerDied","Data":"3f93c9aa49e680f892522c057c055824a8a7297e606c780848f44b7441d7f14b"} Feb 26 08:38:48 crc 
kubenswrapper[4741]: I0226 08:38:48.198182 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e870a59-585e-4369-88d5-644e5034ad33","Type":"ContainerDied","Data":"8adfed2f7411ea18b35d70d9f92b66b3dd54ef704f1eafcbdcbcd1fd19d7544a"} Feb 26 08:38:48 crc kubenswrapper[4741]: I0226 08:38:48.790536 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.144:9090/-/ready\": dial tcp 10.217.0.144:9090: connect: connection refused" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.462863 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-stlzj" podUID="6a8ae1f8-db05-4bc6-a470-60c58ec57f8c" containerName="ovn-controller" probeResult="failure" output=< Feb 26 08:38:51 crc kubenswrapper[4741]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 26 08:38:51 crc kubenswrapper[4741]: > Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.693873 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.744146 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.819174 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-0\") pod \"1e870a59-585e-4369-88d5-644e5034ad33\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.819311 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e870a59-585e-4369-88d5-644e5034ad33-tls-assets\") pod \"1e870a59-585e-4369-88d5-644e5034ad33\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.819350 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-1\") pod \"1e870a59-585e-4369-88d5-644e5034ad33\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.819642 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzrgw\" (UniqueName: \"kubernetes.io/projected/1e870a59-585e-4369-88d5-644e5034ad33-kube-api-access-gzrgw\") pod \"1e870a59-585e-4369-88d5-644e5034ad33\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.819720 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-web-config\") pod \"1e870a59-585e-4369-88d5-644e5034ad33\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.820244 4741 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e5d6940-44b4-47ce-a01f-6be827908482\") pod \"1e870a59-585e-4369-88d5-644e5034ad33\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.820352 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e870a59-585e-4369-88d5-644e5034ad33-config-out\") pod \"1e870a59-585e-4369-88d5-644e5034ad33\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.820480 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-thanos-prometheus-http-client-file\") pod \"1e870a59-585e-4369-88d5-644e5034ad33\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.820522 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-2\") pod \"1e870a59-585e-4369-88d5-644e5034ad33\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.821529 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "1e870a59-585e-4369-88d5-644e5034ad33" (UID: "1e870a59-585e-4369-88d5-644e5034ad33"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.822021 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-config\") pod \"1e870a59-585e-4369-88d5-644e5034ad33\" (UID: \"1e870a59-585e-4369-88d5-644e5034ad33\") " Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.823439 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "1e870a59-585e-4369-88d5-644e5034ad33" (UID: "1e870a59-585e-4369-88d5-644e5034ad33"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.824159 4741 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.824184 4741 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.825186 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "1e870a59-585e-4369-88d5-644e5034ad33" (UID: "1e870a59-585e-4369-88d5-644e5034ad33"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.828642 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "1e870a59-585e-4369-88d5-644e5034ad33" (UID: "1e870a59-585e-4369-88d5-644e5034ad33"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.831620 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e870a59-585e-4369-88d5-644e5034ad33-kube-api-access-gzrgw" (OuterVolumeSpecName: "kube-api-access-gzrgw") pod "1e870a59-585e-4369-88d5-644e5034ad33" (UID: "1e870a59-585e-4369-88d5-644e5034ad33"). InnerVolumeSpecName "kube-api-access-gzrgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.841815 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e870a59-585e-4369-88d5-644e5034ad33-config-out" (OuterVolumeSpecName: "config-out") pod "1e870a59-585e-4369-88d5-644e5034ad33" (UID: "1e870a59-585e-4369-88d5-644e5034ad33"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.842228 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e870a59-585e-4369-88d5-644e5034ad33-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "1e870a59-585e-4369-88d5-644e5034ad33" (UID: "1e870a59-585e-4369-88d5-644e5034ad33"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.845624 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-config" (OuterVolumeSpecName: "config") pod "1e870a59-585e-4369-88d5-644e5034ad33" (UID: "1e870a59-585e-4369-88d5-644e5034ad33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.874922 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e5d6940-44b4-47ce-a01f-6be827908482" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "1e870a59-585e-4369-88d5-644e5034ad33" (UID: "1e870a59-585e-4369-88d5-644e5034ad33"). InnerVolumeSpecName "pvc-0e5d6940-44b4-47ce-a01f-6be827908482". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.883218 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-web-config" (OuterVolumeSpecName: "web-config") pod "1e870a59-585e-4369-88d5-644e5034ad33" (UID: "1e870a59-585e-4369-88d5-644e5034ad33"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.921640 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.927775 4741 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e870a59-585e-4369-88d5-644e5034ad33-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.927812 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzrgw\" (UniqueName: \"kubernetes.io/projected/1e870a59-585e-4369-88d5-644e5034ad33-kube-api-access-gzrgw\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.927823 4741 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-web-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.927852 4741 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0e5d6940-44b4-47ce-a01f-6be827908482\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e5d6940-44b4-47ce-a01f-6be827908482\") on node \"crc\" " Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.927864 4741 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e870a59-585e-4369-88d5-644e5034ad33-config-out\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.927877 4741 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.927888 4741 
reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e870a59-585e-4369-88d5-644e5034ad33-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.927897 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e870a59-585e-4369-88d5-644e5034ad33-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.971231 4741 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 08:38:51 crc kubenswrapper[4741]: I0226 08:38:51.971438 4741 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0e5d6940-44b4-47ce-a01f-6be827908482" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e5d6940-44b4-47ce-a01f-6be827908482") on node "crc" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.030645 4741 reconciler_common.go:293] "Volume detached for volume \"pvc-0e5d6940-44b4-47ce-a01f-6be827908482\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e5d6940-44b4-47ce-a01f-6be827908482\") on node \"crc\" DevicePath \"\"" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.217409 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-zq6r6"] Feb 26 08:38:52 crc kubenswrapper[4741]: E0226 08:38:52.217882 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="init-config-reloader" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.217901 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="init-config-reloader" Feb 26 08:38:52 crc kubenswrapper[4741]: E0226 08:38:52.217918 4741 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="config-reloader" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.217925 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="config-reloader" Feb 26 08:38:52 crc kubenswrapper[4741]: E0226 08:38:52.217941 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d62a09-a551-4192-8c8b-23f6eb9d1af6" containerName="mariadb-account-create-update" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.217950 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d62a09-a551-4192-8c8b-23f6eb9d1af6" containerName="mariadb-account-create-update" Feb 26 08:38:52 crc kubenswrapper[4741]: E0226 08:38:52.217964 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="thanos-sidecar" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.217970 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="thanos-sidecar" Feb 26 08:38:52 crc kubenswrapper[4741]: E0226 08:38:52.217986 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="prometheus" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.217992 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="prometheus" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.218195 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="prometheus" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.218212 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d62a09-a551-4192-8c8b-23f6eb9d1af6" containerName="mariadb-account-create-update" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.218225 4741 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="thanos-sidecar" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.218239 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e870a59-585e-4369-88d5-644e5034ad33" containerName="config-reloader" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.218977 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.249275 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-zq6r6"] Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.251989 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.259099 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e870a59-585e-4369-88d5-644e5034ad33","Type":"ContainerDied","Data":"40259b1847451ba52844b69331bf34b077985ea24727b028d283ec7d4a9e4d21"} Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.259212 4741 scope.go:117] "RemoveContainer" containerID="4e9bf0714e211d7a0ebc64610516c1b5da0890e119122e83377f545f842d7613" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.425810 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-07e6-account-create-update-qwfjj"] Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.428198 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.437581 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.479227 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce5e2451-f816-4a9a-a18d-806eb3f5cf79-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-zq6r6\" (UID: \"ce5e2451-f816-4a9a-a18d-806eb3f5cf79\") " pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.479283 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zr6v\" (UniqueName: \"kubernetes.io/projected/ce5e2451-f816-4a9a-a18d-806eb3f5cf79-kube-api-access-8zr6v\") pod \"mysqld-exporter-openstack-db-create-zq6r6\" (UID: \"ce5e2451-f816-4a9a-a18d-806eb3f5cf79\") " pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.497770 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-07e6-account-create-update-qwfjj"] Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.545811 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.574889 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.581580 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce5e2451-f816-4a9a-a18d-806eb3f5cf79-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-zq6r6\" (UID: 
\"ce5e2451-f816-4a9a-a18d-806eb3f5cf79\") " pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.581635 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zr6v\" (UniqueName: \"kubernetes.io/projected/ce5e2451-f816-4a9a-a18d-806eb3f5cf79-kube-api-access-8zr6v\") pod \"mysqld-exporter-openstack-db-create-zq6r6\" (UID: \"ce5e2451-f816-4a9a-a18d-806eb3f5cf79\") " pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.581739 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c980da-2e8f-4979-b5a8-760039e24ea8-operator-scripts\") pod \"mysqld-exporter-07e6-account-create-update-qwfjj\" (UID: \"85c980da-2e8f-4979-b5a8-760039e24ea8\") " pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.581773 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpmx2\" (UniqueName: \"kubernetes.io/projected/85c980da-2e8f-4979-b5a8-760039e24ea8-kube-api-access-xpmx2\") pod \"mysqld-exporter-07e6-account-create-update-qwfjj\" (UID: \"85c980da-2e8f-4979-b5a8-760039e24ea8\") " pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.583066 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce5e2451-f816-4a9a-a18d-806eb3f5cf79-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-zq6r6\" (UID: \"ce5e2451-f816-4a9a-a18d-806eb3f5cf79\") " pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.585847 4741 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.590523 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.594537 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.594669 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.595925 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.596461 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.596587 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.596728 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.596894 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.597097 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-dhpr9" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.601880 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.604755 4741 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.609507 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zr6v\" (UniqueName: \"kubernetes.io/projected/ce5e2451-f816-4a9a-a18d-806eb3f5cf79-kube-api-access-8zr6v\") pod \"mysqld-exporter-openstack-db-create-zq6r6\" (UID: \"ce5e2451-f816-4a9a-a18d-806eb3f5cf79\") " pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.683742 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.684358 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.684419 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.684472 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/24120f9b-9d9b-4783-9dd9-2450215d3d26-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.684520 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.684785 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-config\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.684932 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/24120f9b-9d9b-4783-9dd9-2450215d3d26-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.684968 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/24120f9b-9d9b-4783-9dd9-2450215d3d26-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " 
pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.685012 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c980da-2e8f-4979-b5a8-760039e24ea8-operator-scripts\") pod \"mysqld-exporter-07e6-account-create-update-qwfjj\" (UID: \"85c980da-2e8f-4979-b5a8-760039e24ea8\") " pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.685129 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpmx2\" (UniqueName: \"kubernetes.io/projected/85c980da-2e8f-4979-b5a8-760039e24ea8-kube-api-access-xpmx2\") pod \"mysqld-exporter-07e6-account-create-update-qwfjj\" (UID: \"85c980da-2e8f-4979-b5a8-760039e24ea8\") " pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.685217 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/24120f9b-9d9b-4783-9dd9-2450215d3d26-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.685259 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/24120f9b-9d9b-4783-9dd9-2450215d3d26-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.685301 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e5d6940-44b4-47ce-a01f-6be827908482\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e5d6940-44b4-47ce-a01f-6be827908482\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.685396 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.685666 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csmzc\" (UniqueName: \"kubernetes.io/projected/24120f9b-9d9b-4783-9dd9-2450215d3d26-kube-api-access-csmzc\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.685792 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c980da-2e8f-4979-b5a8-760039e24ea8-operator-scripts\") pod \"mysqld-exporter-07e6-account-create-update-qwfjj\" (UID: \"85c980da-2e8f-4979-b5a8-760039e24ea8\") " pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.712665 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpmx2\" (UniqueName: \"kubernetes.io/projected/85c980da-2e8f-4979-b5a8-760039e24ea8-kube-api-access-xpmx2\") pod \"mysqld-exporter-07e6-account-create-update-qwfjj\" (UID: \"85c980da-2e8f-4979-b5a8-760039e24ea8\") " pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 
08:38:52.788564 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.789493 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csmzc\" (UniqueName: \"kubernetes.io/projected/24120f9b-9d9b-4783-9dd9-2450215d3d26-kube-api-access-csmzc\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.789870 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.790081 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.790276 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " 
pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.790463 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/24120f9b-9d9b-4783-9dd9-2450215d3d26-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.790675 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.790886 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-config\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.791295 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/24120f9b-9d9b-4783-9dd9-2450215d3d26-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.791426 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/24120f9b-9d9b-4783-9dd9-2450215d3d26-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.791635 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/24120f9b-9d9b-4783-9dd9-2450215d3d26-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.791730 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/24120f9b-9d9b-4783-9dd9-2450215d3d26-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.791820 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e5d6940-44b4-47ce-a01f-6be827908482\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e5d6940-44b4-47ce-a01f-6be827908482\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.791886 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/24120f9b-9d9b-4783-9dd9-2450215d3d26-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.793605 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.794069 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.795326 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.796016 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/24120f9b-9d9b-4783-9dd9-2450215d3d26-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.796092 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/24120f9b-9d9b-4783-9dd9-2450215d3d26-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " 
pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.798939 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.799753 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.799772 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/24120f9b-9d9b-4783-9dd9-2450215d3d26-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.800518 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/24120f9b-9d9b-4783-9dd9-2450215d3d26-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.802741 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-config\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.809436 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/24120f9b-9d9b-4783-9dd9-2450215d3d26-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.811886 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csmzc\" (UniqueName: \"kubernetes.io/projected/24120f9b-9d9b-4783-9dd9-2450215d3d26-kube-api-access-csmzc\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.819783 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.819861 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e5d6940-44b4-47ce-a01f-6be827908482\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e5d6940-44b4-47ce-a01f-6be827908482\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/97a84e84ad55906c3177876b2db4bcb0d96d5603cd9c39a8a1a4f5e259f7d9f9/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.837663 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.924591 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e5d6940-44b4-47ce-a01f-6be827908482\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e5d6940-44b4-47ce-a01f-6be827908482\") pod \"prometheus-metric-storage-0\" (UID: \"24120f9b-9d9b-4783-9dd9-2450215d3d26\") " pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:52 crc kubenswrapper[4741]: I0226 08:38:52.964127 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 08:38:53 crc kubenswrapper[4741]: I0226 08:38:53.306102 4741 scope.go:117] "RemoveContainer" containerID="3f93c9aa49e680f892522c057c055824a8a7297e606c780848f44b7441d7f14b" Feb 26 08:38:53 crc kubenswrapper[4741]: I0226 08:38:53.365031 4741 scope.go:117] "RemoveContainer" containerID="8adfed2f7411ea18b35d70d9f92b66b3dd54ef704f1eafcbdcbcd1fd19d7544a" Feb 26 08:38:53 crc kubenswrapper[4741]: I0226 08:38:53.598999 4741 scope.go:117] "RemoveContainer" containerID="1d5c9d1f24b9ef8be4bceadba12646b88d092e81a4e46b4d6d132ec83d69c54e" Feb 26 08:38:53 crc kubenswrapper[4741]: I0226 08:38:53.800560 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e870a59-585e-4369-88d5-644e5034ad33" path="/var/lib/kubelet/pods/1e870a59-585e-4369-88d5-644e5034ad33/volumes" Feb 26 08:38:53 crc kubenswrapper[4741]: I0226 08:38:53.935074 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:38:53 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:38:53 crc kubenswrapper[4741]: > Feb 26 08:38:54 crc kubenswrapper[4741]: W0226 08:38:54.036928 4741 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce5e2451_f816_4a9a_a18d_806eb3f5cf79.slice/crio-0e0fed2046c9cfc376ffb4ef03813b66dc50b0b000ad7c06f8ff1a4671c2fb3a WatchSource:0}: Error finding container 0e0fed2046c9cfc376ffb4ef03813b66dc50b0b000ad7c06f8ff1a4671c2fb3a: Status 404 returned error can't find the container with id 0e0fed2046c9cfc376ffb4ef03813b66dc50b0b000ad7c06f8ff1a4671c2fb3a
Feb 26 08:38:54 crc kubenswrapper[4741]: I0226 08:38:54.056284 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-zq6r6"]
Feb 26 08:38:54 crc kubenswrapper[4741]: I0226 08:38:54.082352 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-07e6-account-create-update-qwfjj"]
Feb 26 08:38:54 crc kubenswrapper[4741]: I0226 08:38:54.101052 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 26 08:38:54 crc kubenswrapper[4741]: I0226 08:38:54.278755 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xwx84" event={"ID":"548f1177-df4c-4b50-920f-f5b9ff95c283","Type":"ContainerStarted","Data":"2974879b5e173841f919882fcee2e3848b7b48261d59cae34e225a0bb0cb1e94"}
Feb 26 08:38:54 crc kubenswrapper[4741]: I0226 08:38:54.281565 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"24120f9b-9d9b-4783-9dd9-2450215d3d26","Type":"ContainerStarted","Data":"be714cfcdda98b860ccb3eb3564139e23056ab06b301130adbc452666572672a"}
Feb 26 08:38:54 crc kubenswrapper[4741]: I0226 08:38:54.285188 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" event={"ID":"ce5e2451-f816-4a9a-a18d-806eb3f5cf79","Type":"ContainerStarted","Data":"0e0fed2046c9cfc376ffb4ef03813b66dc50b0b000ad7c06f8ff1a4671c2fb3a"}
Feb 26 08:38:54 crc kubenswrapper[4741]: I0226 08:38:54.287200 4741 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" event={"ID":"85c980da-2e8f-4979-b5a8-760039e24ea8","Type":"ContainerStarted","Data":"587eb51702c39102f723225f3802bb78410e84eb4567427d1d0cdeda2bfe34e8"}
Feb 26 08:38:54 crc kubenswrapper[4741]: I0226 08:38:54.910709 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jgvpc"]
Feb 26 08:38:54 crc kubenswrapper[4741]: I0226 08:38:54.913587 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jgvpc"
Feb 26 08:38:54 crc kubenswrapper[4741]: I0226 08:38:54.961763 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jgvpc"]
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.078479 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24d5b3c3-ad6b-4a93-86f5-842d24d6c20b-operator-scripts\") pod \"glance-db-create-jgvpc\" (UID: \"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b\") " pod="openstack/glance-db-create-jgvpc"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.078708 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2fvq\" (UniqueName: \"kubernetes.io/projected/24d5b3c3-ad6b-4a93-86f5-842d24d6c20b-kube-api-access-n2fvq\") pod \"glance-db-create-jgvpc\" (UID: \"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b\") " pod="openstack/glance-db-create-jgvpc"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.100317 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2dff-account-create-update-nxlxz"]
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.102701 4741 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-2dff-account-create-update-nxlxz"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.116600 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.136806 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2dff-account-create-update-nxlxz"]
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.181313 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24d5b3c3-ad6b-4a93-86f5-842d24d6c20b-operator-scripts\") pod \"glance-db-create-jgvpc\" (UID: \"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b\") " pod="openstack/glance-db-create-jgvpc"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.188765 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2fvq\" (UniqueName: \"kubernetes.io/projected/24d5b3c3-ad6b-4a93-86f5-842d24d6c20b-kube-api-access-n2fvq\") pod \"glance-db-create-jgvpc\" (UID: \"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b\") " pod="openstack/glance-db-create-jgvpc"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.190451 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24d5b3c3-ad6b-4a93-86f5-842d24d6c20b-operator-scripts\") pod \"glance-db-create-jgvpc\" (UID: \"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b\") " pod="openstack/glance-db-create-jgvpc"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.225704 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2fvq\" (UniqueName: \"kubernetes.io/projected/24d5b3c3-ad6b-4a93-86f5-842d24d6c20b-kube-api-access-n2fvq\") pod \"glance-db-create-jgvpc\" (UID: \"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b\") " pod="openstack/glance-db-create-jgvpc"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226
08:38:55.261166 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jgvpc"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.297531 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eeddba64-b6cc-4ae0-8c09-8252931e1778-operator-scripts\") pod \"glance-2dff-account-create-update-nxlxz\" (UID: \"eeddba64-b6cc-4ae0-8c09-8252931e1778\") " pod="openstack/glance-2dff-account-create-update-nxlxz"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.297868 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56kj5\" (UniqueName: \"kubernetes.io/projected/eeddba64-b6cc-4ae0-8c09-8252931e1778-kube-api-access-56kj5\") pod \"glance-2dff-account-create-update-nxlxz\" (UID: \"eeddba64-b6cc-4ae0-8c09-8252931e1778\") " pod="openstack/glance-2dff-account-create-update-nxlxz"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.328507 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"10d582b9-9e4a-4ce4-8763-addb194c9ced","Type":"ContainerStarted","Data":"b225d14214eaababb7e2ad0a1c31e60a1df15c7fd8b6b45456f910dc8346643e"}
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.331065 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" event={"ID":"ce5e2451-f816-4a9a-a18d-806eb3f5cf79","Type":"ContainerStarted","Data":"47911601efd911ea46aa92154f47fe5c8423a33303e8ae738bb20a5471054b43"}
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.335303 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" event={"ID":"85c980da-2e8f-4979-b5a8-760039e24ea8","Type":"ContainerStarted","Data":"555218fc003cd61346324b54cbd4fe9a364a6b4ec6b7081dac9d9b85e5786835"}
Feb 26 08:38:55 crc
kubenswrapper[4741]: I0226 08:38:55.363500 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-xwx84" podStartSLOduration=18.051244944 podStartE2EDuration="1m2.363473967s" podCreationTimestamp="2026-02-26 08:37:53 +0000 UTC" firstStartedPulling="2026-02-26 08:38:09.013913681 +0000 UTC m=+1524.009851068" lastFinishedPulling="2026-02-26 08:38:53.326142704 +0000 UTC m=+1568.322080091" observedRunningTime="2026-02-26 08:38:55.358795165 +0000 UTC m=+1570.354732552" watchObservedRunningTime="2026-02-26 08:38:55.363473967 +0000 UTC m=+1570.359411354"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.400657 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56kj5\" (UniqueName: \"kubernetes.io/projected/eeddba64-b6cc-4ae0-8c09-8252931e1778-kube-api-access-56kj5\") pod \"glance-2dff-account-create-update-nxlxz\" (UID: \"eeddba64-b6cc-4ae0-8c09-8252931e1778\") " pod="openstack/glance-2dff-account-create-update-nxlxz"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.400763 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eeddba64-b6cc-4ae0-8c09-8252931e1778-operator-scripts\") pod \"glance-2dff-account-create-update-nxlxz\" (UID: \"eeddba64-b6cc-4ae0-8c09-8252931e1778\") " pod="openstack/glance-2dff-account-create-update-nxlxz"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.401881 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eeddba64-b6cc-4ae0-8c09-8252931e1778-operator-scripts\") pod \"glance-2dff-account-create-update-nxlxz\" (UID: \"eeddba64-b6cc-4ae0-8c09-8252931e1778\") " pod="openstack/glance-2dff-account-create-update-nxlxz"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.420914 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access-56kj5\" (UniqueName: \"kubernetes.io/projected/eeddba64-b6cc-4ae0-8c09-8252931e1778-kube-api-access-56kj5\") pod \"glance-2dff-account-create-update-nxlxz\" (UID: \"eeddba64-b6cc-4ae0-8c09-8252931e1778\") " pod="openstack/glance-2dff-account-create-update-nxlxz"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.435475 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2dff-account-create-update-nxlxz"
Feb 26 08:38:55 crc kubenswrapper[4741]: I0226 08:38:55.984471 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2dff-account-create-update-nxlxz"]
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.073866 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jgvpc"]
Feb 26 08:38:56 crc kubenswrapper[4741]: W0226 08:38:56.075231 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d5b3c3_ad6b_4a93_86f5_842d24d6c20b.slice/crio-669651a75d2fee54606d18591cc7699e1240210a46455390f033ab92186d67b1 WatchSource:0}: Error finding container 669651a75d2fee54606d18591cc7699e1240210a46455390f033ab92186d67b1: Status 404 returned error can't find the container with id 669651a75d2fee54606d18591cc7699e1240210a46455390f033ab92186d67b1
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.349276 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jgvpc" event={"ID":"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b","Type":"ContainerStarted","Data":"669651a75d2fee54606d18591cc7699e1240210a46455390f033ab92186d67b1"}
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.351497 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2dff-account-create-update-nxlxz" event={"ID":"eeddba64-b6cc-4ae0-8c09-8252931e1778","Type":"ContainerStarted","Data":"ed0ea9fef86c18b931992cad034abfec004ae6ab6891fa5482c1d3e1d84bca0a"}
Feb 26
08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.351958 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.389670 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" podStartSLOduration=4.389638102 podStartE2EDuration="4.389638102s" podCreationTimestamp="2026-02-26 08:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:38:56.379220327 +0000 UTC m=+1571.375157714" watchObservedRunningTime="2026-02-26 08:38:56.389638102 +0000 UTC m=+1571.385575489"
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.407567 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=22.540016282 podStartE2EDuration="1m9.407539s" podCreationTimestamp="2026-02-26 08:37:47 +0000 UTC" firstStartedPulling="2026-02-26 08:38:06.475909856 +0000 UTC m=+1521.471847243" lastFinishedPulling="2026-02-26 08:38:53.343432584 +0000 UTC m=+1568.339369961" observedRunningTime="2026-02-26 08:38:56.402210099 +0000 UTC m=+1571.398147486" watchObservedRunningTime="2026-02-26 08:38:56.407539 +0000 UTC m=+1571.403476397"
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.457307 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-stlzj" podUID="6a8ae1f8-db05-4bc6-a470-60c58ec57f8c" containerName="ovn-controller" probeResult="failure" output=<
Feb 26 08:38:56 crc kubenswrapper[4741]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 26 08:38:56 crc kubenswrapper[4741]: >
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.612860 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tmkws"]
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226
08:38:56.621689 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tmkws"]
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.689002 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-v24ql"]
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.690955 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v24ql"
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.695285 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.709836 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v24ql"]
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.838524 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfwrk\" (UniqueName: \"kubernetes.io/projected/04f305d3-533d-46ac-9db3-fd55e864eb83-kube-api-access-wfwrk\") pod \"root-account-create-update-v24ql\" (UID: \"04f305d3-533d-46ac-9db3-fd55e864eb83\") " pod="openstack/root-account-create-update-v24ql"
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.838625 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04f305d3-533d-46ac-9db3-fd55e864eb83-operator-scripts\") pod \"root-account-create-update-v24ql\" (UID: \"04f305d3-533d-46ac-9db3-fd55e864eb83\") " pod="openstack/root-account-create-update-v24ql"
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.941395 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfwrk\" (UniqueName: \"kubernetes.io/projected/04f305d3-533d-46ac-9db3-fd55e864eb83-kube-api-access-wfwrk\") pod \"root-account-create-update-v24ql\" (UID:
\"04f305d3-533d-46ac-9db3-fd55e864eb83\") " pod="openstack/root-account-create-update-v24ql"
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.941490 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04f305d3-533d-46ac-9db3-fd55e864eb83-operator-scripts\") pod \"root-account-create-update-v24ql\" (UID: \"04f305d3-533d-46ac-9db3-fd55e864eb83\") " pod="openstack/root-account-create-update-v24ql"
Feb 26 08:38:56 crc kubenswrapper[4741]: I0226 08:38:56.943020 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04f305d3-533d-46ac-9db3-fd55e864eb83-operator-scripts\") pod \"root-account-create-update-v24ql\" (UID: \"04f305d3-533d-46ac-9db3-fd55e864eb83\") " pod="openstack/root-account-create-update-v24ql"
Feb 26 08:38:57 crc kubenswrapper[4741]: I0226 08:38:57.202283 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfwrk\" (UniqueName: \"kubernetes.io/projected/04f305d3-533d-46ac-9db3-fd55e864eb83-kube-api-access-wfwrk\") pod \"root-account-create-update-v24ql\" (UID: \"04f305d3-533d-46ac-9db3-fd55e864eb83\") " pod="openstack/root-account-create-update-v24ql"
Feb 26 08:38:57 crc kubenswrapper[4741]: I0226 08:38:57.318160 4741 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/root-account-create-update-v24ql"
Feb 26 08:38:57 crc kubenswrapper[4741]: I0226 08:38:57.398935 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" podStartSLOduration=5.398915738 podStartE2EDuration="5.398915738s" podCreationTimestamp="2026-02-26 08:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:38:57.387542515 +0000 UTC m=+1572.383479922" watchObservedRunningTime="2026-02-26 08:38:57.398915738 +0000 UTC m=+1572.394853125"
Feb 26 08:38:57 crc kubenswrapper[4741]: I0226 08:38:57.455075 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0"
Feb 26 08:38:57 crc kubenswrapper[4741]: E0226 08:38:57.455707 4741 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 26 08:38:57 crc kubenswrapper[4741]: E0226 08:38:57.456103 4741 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 26 08:38:57 crc kubenswrapper[4741]: E0226 08:38:57.456295 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift podName:91b0231b-fbdf-4714-ac14-d3621c8c7807 nodeName:}" failed. No retries permitted until 2026-02-26 08:40:01.456259114 +0000 UTC m=+1636.452196491 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift") pod "swift-storage-0" (UID: "91b0231b-fbdf-4714-ac14-d3621c8c7807") : configmap "swift-ring-files" not found
Feb 26 08:38:57 crc kubenswrapper[4741]: I0226 08:38:57.776798 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.137:5671: connect: connection refused"
Feb 26 08:38:57 crc kubenswrapper[4741]: I0226 08:38:57.801857 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5d62a09-a551-4192-8c8b-23f6eb9d1af6" path="/var/lib/kubelet/pods/a5d62a09-a551-4192-8c8b-23f6eb9d1af6/volumes"
Feb 26 08:38:57 crc kubenswrapper[4741]: I0226 08:38:57.869899 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v24ql"]
Feb 26 08:38:58 crc kubenswrapper[4741]: I0226 08:38:58.388637 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2dff-account-create-update-nxlxz" event={"ID":"eeddba64-b6cc-4ae0-8c09-8252931e1778","Type":"ContainerStarted","Data":"f17af16b16c06fb36c44e9c40d492057f3ab8b59aaf40c4221e2b545a1921c0a"}
Feb 26 08:38:58 crc kubenswrapper[4741]: I0226 08:38:58.391201 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jgvpc" event={"ID":"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b","Type":"ContainerStarted","Data":"81aa89192a2e11ea8f9025896484c4a7ef67efeca2a43b7274dd5fff20d4b718"}
Feb 26 08:38:58 crc kubenswrapper[4741]: I0226 08:38:58.392032 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v24ql" event={"ID":"04f305d3-533d-46ac-9db3-fd55e864eb83","Type":"ContainerStarted","Data":"cc4b3f079c538a7ea723873554332ea013a897503cc0b31e84a95cb533042575"}
Feb 26 08:38:59 crc kubenswrapper[4741]: I0226 08:38:59.404424
4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"24120f9b-9d9b-4783-9dd9-2450215d3d26","Type":"ContainerStarted","Data":"cff3e36d89d4668265be9293420d4fb1167b136812e855b906ffbbe078fd1f09"}
Feb 26 08:38:59 crc kubenswrapper[4741]: I0226 08:38:59.408099 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v24ql" event={"ID":"04f305d3-533d-46ac-9db3-fd55e864eb83","Type":"ContainerStarted","Data":"3f3662413c3512f9cd116f9981d6dac17865fdbb696b19dd45b77eeecebe71d5"}
Feb 26 08:38:59 crc kubenswrapper[4741]: I0226 08:38:59.447025 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-2dff-account-create-update-nxlxz" podStartSLOduration=4.446999447 podStartE2EDuration="4.446999447s" podCreationTimestamp="2026-02-26 08:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:38:59.43226426 +0000 UTC m=+1574.428201647" watchObservedRunningTime="2026-02-26 08:38:59.446999447 +0000 UTC m=+1574.442936834"
Feb 26 08:38:59 crc kubenswrapper[4741]: I0226 08:38:59.472245 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-jgvpc" podStartSLOduration=5.472214143 podStartE2EDuration="5.472214143s" podCreationTimestamp="2026-02-26 08:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:38:59.4551895 +0000 UTC m=+1574.451126907" watchObservedRunningTime="2026-02-26 08:38:59.472214143 +0000 UTC m=+1574.468151540"
Feb 26 08:38:59 crc kubenswrapper[4741]: I0226 08:38:59.503503 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-v24ql" podStartSLOduration=3.503467259 podStartE2EDuration="3.503467259s" podCreationTimestamp="2026-02-26
08:38:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:38:59.486271671 +0000 UTC m=+1574.482209048" watchObservedRunningTime="2026-02-26 08:38:59.503467259 +0000 UTC m=+1574.499404646"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.420209 4741 generic.go:334] "Generic (PLEG): container finished" podID="ce5e2451-f816-4a9a-a18d-806eb3f5cf79" containerID="47911601efd911ea46aa92154f47fe5c8423a33303e8ae738bb20a5471054b43" exitCode=0
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.420686 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" event={"ID":"ce5e2451-f816-4a9a-a18d-806eb3f5cf79","Type":"ContainerDied","Data":"47911601efd911ea46aa92154f47fe5c8423a33303e8ae738bb20a5471054b43"}
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.430890 4741 generic.go:334] "Generic (PLEG): container finished" podID="24d5b3c3-ad6b-4a93-86f5-842d24d6c20b" containerID="81aa89192a2e11ea8f9025896484c4a7ef67efeca2a43b7274dd5fff20d4b718" exitCode=0
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.432282 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jgvpc" event={"ID":"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b","Type":"ContainerDied","Data":"81aa89192a2e11ea8f9025896484c4a7ef67efeca2a43b7274dd5fff20d4b718"}
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.550062 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6rxbw"]
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.553689 4741 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-db-create-6rxbw"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.571239 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6rxbw"]
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.678984 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1aa2ef24-5972-42d6-b38e-adaef893b130-operator-scripts\") pod \"keystone-db-create-6rxbw\" (UID: \"1aa2ef24-5972-42d6-b38e-adaef893b130\") " pod="openstack/keystone-db-create-6rxbw"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.679251 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv2cw\" (UniqueName: \"kubernetes.io/projected/1aa2ef24-5972-42d6-b38e-adaef893b130-kube-api-access-vv2cw\") pod \"keystone-db-create-6rxbw\" (UID: \"1aa2ef24-5972-42d6-b38e-adaef893b130\") " pod="openstack/keystone-db-create-6rxbw"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.679953 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-abee-account-create-update-jg7s7"]
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.686974 4741 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-abee-account-create-update-jg7s7"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.692296 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.717562 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-abee-account-create-update-jg7s7"]
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.782577 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00378f6-b3d9-40f7-889c-a6cce27718c4-operator-scripts\") pod \"keystone-abee-account-create-update-jg7s7\" (UID: \"e00378f6-b3d9-40f7-889c-a6cce27718c4\") " pod="openstack/keystone-abee-account-create-update-jg7s7"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.782910 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1aa2ef24-5972-42d6-b38e-adaef893b130-operator-scripts\") pod \"keystone-db-create-6rxbw\" (UID: \"1aa2ef24-5972-42d6-b38e-adaef893b130\") " pod="openstack/keystone-db-create-6rxbw"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.783050 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkk9x\" (UniqueName: \"kubernetes.io/projected/e00378f6-b3d9-40f7-889c-a6cce27718c4-kube-api-access-rkk9x\") pod \"keystone-abee-account-create-update-jg7s7\" (UID: \"e00378f6-b3d9-40f7-889c-a6cce27718c4\") " pod="openstack/keystone-abee-account-create-update-jg7s7"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.783164 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv2cw\" (UniqueName: \"kubernetes.io/projected/1aa2ef24-5972-42d6-b38e-adaef893b130-kube-api-access-vv2cw\") pod \"keystone-db-create-6rxbw\"
(UID: \"1aa2ef24-5972-42d6-b38e-adaef893b130\") " pod="openstack/keystone-db-create-6rxbw"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.784282 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1aa2ef24-5972-42d6-b38e-adaef893b130-operator-scripts\") pod \"keystone-db-create-6rxbw\" (UID: \"1aa2ef24-5972-42d6-b38e-adaef893b130\") " pod="openstack/keystone-db-create-6rxbw"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.815310 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv2cw\" (UniqueName: \"kubernetes.io/projected/1aa2ef24-5972-42d6-b38e-adaef893b130-kube-api-access-vv2cw\") pod \"keystone-db-create-6rxbw\" (UID: \"1aa2ef24-5972-42d6-b38e-adaef893b130\") " pod="openstack/keystone-db-create-6rxbw"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.845622 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-t7rkz"]
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.849536 4741 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-db-create-t7rkz"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.871470 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-t7rkz"]
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.885530 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkk9x\" (UniqueName: \"kubernetes.io/projected/e00378f6-b3d9-40f7-889c-a6cce27718c4-kube-api-access-rkk9x\") pod \"keystone-abee-account-create-update-jg7s7\" (UID: \"e00378f6-b3d9-40f7-889c-a6cce27718c4\") " pod="openstack/keystone-abee-account-create-update-jg7s7"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.885704 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00378f6-b3d9-40f7-889c-a6cce27718c4-operator-scripts\") pod \"keystone-abee-account-create-update-jg7s7\" (UID: \"e00378f6-b3d9-40f7-889c-a6cce27718c4\") " pod="openstack/keystone-abee-account-create-update-jg7s7"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.887254 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00378f6-b3d9-40f7-889c-a6cce27718c4-operator-scripts\") pod \"keystone-abee-account-create-update-jg7s7\" (UID: \"e00378f6-b3d9-40f7-889c-a6cce27718c4\") " pod="openstack/keystone-abee-account-create-update-jg7s7"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.900729 4741 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-db-create-6rxbw"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.921798 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkk9x\" (UniqueName: \"kubernetes.io/projected/e00378f6-b3d9-40f7-889c-a6cce27718c4-kube-api-access-rkk9x\") pod \"keystone-abee-account-create-update-jg7s7\" (UID: \"e00378f6-b3d9-40f7-889c-a6cce27718c4\") " pod="openstack/keystone-abee-account-create-update-jg7s7"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.958848 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-26f9-account-create-update-dsrjk"]
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.960899 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-26f9-account-create-update-dsrjk"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.963647 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.989196 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2x52\" (UniqueName: \"kubernetes.io/projected/10ea5eb4-4f56-417f-84fd-5ae940e74516-kube-api-access-d2x52\") pod \"placement-db-create-t7rkz\" (UID: \"10ea5eb4-4f56-417f-84fd-5ae940e74516\") " pod="openstack/placement-db-create-t7rkz"
Feb 26 08:39:00 crc kubenswrapper[4741]: I0226 08:39:00.989327 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ea5eb4-4f56-417f-84fd-5ae940e74516-operator-scripts\") pod \"placement-db-create-t7rkz\" (UID: \"10ea5eb4-4f56-417f-84fd-5ae940e74516\") " pod="openstack/placement-db-create-t7rkz"
Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.002501 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openstack/placement-26f9-account-create-update-dsrjk"] Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.082586 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-abee-account-create-update-jg7s7" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.093654 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ea5eb4-4f56-417f-84fd-5ae940e74516-operator-scripts\") pod \"placement-db-create-t7rkz\" (UID: \"10ea5eb4-4f56-417f-84fd-5ae940e74516\") " pod="openstack/placement-db-create-t7rkz" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.094320 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2mpf\" (UniqueName: \"kubernetes.io/projected/93463751-5c16-4d33-abad-392b566eef58-kube-api-access-g2mpf\") pod \"placement-26f9-account-create-update-dsrjk\" (UID: \"93463751-5c16-4d33-abad-392b566eef58\") " pod="openstack/placement-26f9-account-create-update-dsrjk" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.094467 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2x52\" (UniqueName: \"kubernetes.io/projected/10ea5eb4-4f56-417f-84fd-5ae940e74516-kube-api-access-d2x52\") pod \"placement-db-create-t7rkz\" (UID: \"10ea5eb4-4f56-417f-84fd-5ae940e74516\") " pod="openstack/placement-db-create-t7rkz" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.094513 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93463751-5c16-4d33-abad-392b566eef58-operator-scripts\") pod \"placement-26f9-account-create-update-dsrjk\" (UID: \"93463751-5c16-4d33-abad-392b566eef58\") " pod="openstack/placement-26f9-account-create-update-dsrjk" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.095645 
4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ea5eb4-4f56-417f-84fd-5ae940e74516-operator-scripts\") pod \"placement-db-create-t7rkz\" (UID: \"10ea5eb4-4f56-417f-84fd-5ae940e74516\") " pod="openstack/placement-db-create-t7rkz" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.136747 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2x52\" (UniqueName: \"kubernetes.io/projected/10ea5eb4-4f56-417f-84fd-5ae940e74516-kube-api-access-d2x52\") pod \"placement-db-create-t7rkz\" (UID: \"10ea5eb4-4f56-417f-84fd-5ae940e74516\") " pod="openstack/placement-db-create-t7rkz" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.196949 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2mpf\" (UniqueName: \"kubernetes.io/projected/93463751-5c16-4d33-abad-392b566eef58-kube-api-access-g2mpf\") pod \"placement-26f9-account-create-update-dsrjk\" (UID: \"93463751-5c16-4d33-abad-392b566eef58\") " pod="openstack/placement-26f9-account-create-update-dsrjk" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.197203 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93463751-5c16-4d33-abad-392b566eef58-operator-scripts\") pod \"placement-26f9-account-create-update-dsrjk\" (UID: \"93463751-5c16-4d33-abad-392b566eef58\") " pod="openstack/placement-26f9-account-create-update-dsrjk" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.197488 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-t7rkz" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.198697 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93463751-5c16-4d33-abad-392b566eef58-operator-scripts\") pod \"placement-26f9-account-create-update-dsrjk\" (UID: \"93463751-5c16-4d33-abad-392b566eef58\") " pod="openstack/placement-26f9-account-create-update-dsrjk" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.219545 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2mpf\" (UniqueName: \"kubernetes.io/projected/93463751-5c16-4d33-abad-392b566eef58-kube-api-access-g2mpf\") pod \"placement-26f9-account-create-update-dsrjk\" (UID: \"93463751-5c16-4d33-abad-392b566eef58\") " pod="openstack/placement-26f9-account-create-update-dsrjk" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.354463 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-26f9-account-create-update-dsrjk" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.431021 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-stlzj" podUID="6a8ae1f8-db05-4bc6-a470-60c58ec57f8c" containerName="ovn-controller" probeResult="failure" output=< Feb 26 08:39:01 crc kubenswrapper[4741]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 26 08:39:01 crc kubenswrapper[4741]: > Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.477915 4741 generic.go:334] "Generic (PLEG): container finished" podID="eeddba64-b6cc-4ae0-8c09-8252931e1778" containerID="f17af16b16c06fb36c44e9c40d492057f3ab8b59aaf40c4221e2b545a1921c0a" exitCode=0 Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.478013 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2dff-account-create-update-nxlxz" event={"ID":"eeddba64-b6cc-4ae0-8c09-8252931e1778","Type":"ContainerDied","Data":"f17af16b16c06fb36c44e9c40d492057f3ab8b59aaf40c4221e2b545a1921c0a"} Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.487468 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6rxbw"] Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.500631 4741 generic.go:334] "Generic (PLEG): container finished" podID="85c980da-2e8f-4979-b5a8-760039e24ea8" containerID="555218fc003cd61346324b54cbd4fe9a364a6b4ec6b7081dac9d9b85e5786835" exitCode=0 Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.500711 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" event={"ID":"85c980da-2e8f-4979-b5a8-760039e24ea8","Type":"ContainerDied","Data":"555218fc003cd61346324b54cbd4fe9a364a6b4ec6b7081dac9d9b85e5786835"} Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.531768 4741 generic.go:334] "Generic (PLEG): container finished" 
podID="04f305d3-533d-46ac-9db3-fd55e864eb83" containerID="3f3662413c3512f9cd116f9981d6dac17865fdbb696b19dd45b77eeecebe71d5" exitCode=0 Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.532333 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v24ql" event={"ID":"04f305d3-533d-46ac-9db3-fd55e864eb83","Type":"ContainerDied","Data":"3f3662413c3512f9cd116f9981d6dac17865fdbb696b19dd45b77eeecebe71d5"} Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.614426 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-h2ft9" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.698202 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-abee-account-create-update-jg7s7"] Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.900765 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-stlzj-config-84ljd"] Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.903547 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.906761 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.919255 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-stlzj-config-84ljd"] Feb 26 08:39:01 crc kubenswrapper[4741]: I0226 08:39:01.964230 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-t7rkz"] Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.035620 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-run\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.035681 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-scripts\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.035718 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-additional-scripts\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.035747 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-log-ovn\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.035882 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-run-ovn\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.035907 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzrt4\" (UniqueName: \"kubernetes.io/projected/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-kube-api-access-lzrt4\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.137788 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-run\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.137860 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-scripts\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.137899 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-additional-scripts\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.137928 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-log-ovn\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.138085 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-run-ovn\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.138119 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzrt4\" (UniqueName: \"kubernetes.io/projected/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-kube-api-access-lzrt4\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.138329 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-run\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.138447 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-log-ovn\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.139251 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-additional-scripts\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.139309 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-run-ovn\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.153286 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-scripts\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.227097 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzrt4\" (UniqueName: \"kubernetes.io/projected/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-kube-api-access-lzrt4\") pod \"ovn-controller-stlzj-config-84ljd\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.363336 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.554932 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jgvpc" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.559361 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6rxbw" event={"ID":"1aa2ef24-5972-42d6-b38e-adaef893b130","Type":"ContainerStarted","Data":"5d59b3a2f05c1e1be2c355709693839d893192716e8ab99fa18422e08892be4d"} Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.559415 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6rxbw" event={"ID":"1aa2ef24-5972-42d6-b38e-adaef893b130","Type":"ContainerStarted","Data":"7ffbaff6e7ebf69ee8e315fa625948b7e958f5808b353179e3cfcc5b9b87e7be"} Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.563434 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.588953 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" event={"ID":"ce5e2451-f816-4a9a-a18d-806eb3f5cf79","Type":"ContainerDied","Data":"0e0fed2046c9cfc376ffb4ef03813b66dc50b0b000ad7c06f8ff1a4671c2fb3a"} Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.589027 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e0fed2046c9cfc376ffb4ef03813b66dc50b0b000ad7c06f8ff1a4671c2fb3a" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.607806 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-t7rkz" event={"ID":"10ea5eb4-4f56-417f-84fd-5ae940e74516","Type":"ContainerStarted","Data":"c21e683f566828af44518b4f575ed232daac8ace1edce1865551c83b96111d30"} Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 
08:39:02.648886 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-abee-account-create-update-jg7s7" event={"ID":"e00378f6-b3d9-40f7-889c-a6cce27718c4","Type":"ContainerStarted","Data":"49492da9e102c1d580506b4d5f4b63b99805c13713c27abd920fecfc7940b33d"} Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.648964 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-abee-account-create-update-jg7s7" event={"ID":"e00378f6-b3d9-40f7-889c-a6cce27718c4","Type":"ContainerStarted","Data":"13c239a93ed35f873614b99c16dac86d28418f403ea20c4de7fbf48050e22ef8"} Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.659507 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce5e2451-f816-4a9a-a18d-806eb3f5cf79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce5e2451-f816-4a9a-a18d-806eb3f5cf79" (UID: "ce5e2451-f816-4a9a-a18d-806eb3f5cf79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.663436 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-26f9-account-create-update-dsrjk"] Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.659728 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce5e2451-f816-4a9a-a18d-806eb3f5cf79-operator-scripts\") pod \"ce5e2451-f816-4a9a-a18d-806eb3f5cf79\" (UID: \"ce5e2451-f816-4a9a-a18d-806eb3f5cf79\") " Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.664027 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2fvq\" (UniqueName: \"kubernetes.io/projected/24d5b3c3-ad6b-4a93-86f5-842d24d6c20b-kube-api-access-n2fvq\") pod \"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b\" (UID: \"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b\") " Feb 26 08:39:02 crc 
kubenswrapper[4741]: I0226 08:39:02.664456 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24d5b3c3-ad6b-4a93-86f5-842d24d6c20b-operator-scripts\") pod \"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b\" (UID: \"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b\") " Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.664564 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zr6v\" (UniqueName: \"kubernetes.io/projected/ce5e2451-f816-4a9a-a18d-806eb3f5cf79-kube-api-access-8zr6v\") pod \"ce5e2451-f816-4a9a-a18d-806eb3f5cf79\" (UID: \"ce5e2451-f816-4a9a-a18d-806eb3f5cf79\") " Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.667246 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24d5b3c3-ad6b-4a93-86f5-842d24d6c20b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24d5b3c3-ad6b-4a93-86f5-842d24d6c20b" (UID: "24d5b3c3-ad6b-4a93-86f5-842d24d6c20b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.667515 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce5e2451-f816-4a9a-a18d-806eb3f5cf79-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.692902 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d5b3c3-ad6b-4a93-86f5-842d24d6c20b-kube-api-access-n2fvq" (OuterVolumeSpecName: "kube-api-access-n2fvq") pod "24d5b3c3-ad6b-4a93-86f5-842d24d6c20b" (UID: "24d5b3c3-ad6b-4a93-86f5-842d24d6c20b"). InnerVolumeSpecName "kube-api-access-n2fvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.711393 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jgvpc" event={"ID":"24d5b3c3-ad6b-4a93-86f5-842d24d6c20b","Type":"ContainerDied","Data":"669651a75d2fee54606d18591cc7699e1240210a46455390f033ab92186d67b1"} Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.711469 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="669651a75d2fee54606d18591cc7699e1240210a46455390f033ab92186d67b1" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.711750 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jgvpc" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.723662 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce5e2451-f816-4a9a-a18d-806eb3f5cf79-kube-api-access-8zr6v" (OuterVolumeSpecName: "kube-api-access-8zr6v") pod "ce5e2451-f816-4a9a-a18d-806eb3f5cf79" (UID: "ce5e2451-f816-4a9a-a18d-806eb3f5cf79"). InnerVolumeSpecName "kube-api-access-8zr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.769600 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-6rxbw" podStartSLOduration=2.769561114 podStartE2EDuration="2.769561114s" podCreationTimestamp="2026-02-26 08:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:39:02.648639785 +0000 UTC m=+1577.644577172" watchObservedRunningTime="2026-02-26 08:39:02.769561114 +0000 UTC m=+1577.765498531" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.801125 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-abee-account-create-update-jg7s7" podStartSLOduration=2.801083898 podStartE2EDuration="2.801083898s" podCreationTimestamp="2026-02-26 08:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:39:02.699677192 +0000 UTC m=+1577.695614579" watchObservedRunningTime="2026-02-26 08:39:02.801083898 +0000 UTC m=+1577.797021285" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.882004 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2fvq\" (UniqueName: \"kubernetes.io/projected/24d5b3c3-ad6b-4a93-86f5-842d24d6c20b-kube-api-access-n2fvq\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.890065 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24d5b3c3-ad6b-4a93-86f5-842d24d6c20b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:02 crc kubenswrapper[4741]: I0226 08:39:02.890121 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zr6v\" (UniqueName: 
\"kubernetes.io/projected/ce5e2451-f816-4a9a-a18d-806eb3f5cf79-kube-api-access-8zr6v\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.636909 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2dff-account-create-update-nxlxz" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.657151 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v24ql" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.731770 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfwrk\" (UniqueName: \"kubernetes.io/projected/04f305d3-533d-46ac-9db3-fd55e864eb83-kube-api-access-wfwrk\") pod \"04f305d3-533d-46ac-9db3-fd55e864eb83\" (UID: \"04f305d3-533d-46ac-9db3-fd55e864eb83\") " Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.731936 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04f305d3-533d-46ac-9db3-fd55e864eb83-operator-scripts\") pod \"04f305d3-533d-46ac-9db3-fd55e864eb83\" (UID: \"04f305d3-533d-46ac-9db3-fd55e864eb83\") " Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.731988 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eeddba64-b6cc-4ae0-8c09-8252931e1778-operator-scripts\") pod \"eeddba64-b6cc-4ae0-8c09-8252931e1778\" (UID: \"eeddba64-b6cc-4ae0-8c09-8252931e1778\") " Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.732175 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56kj5\" (UniqueName: \"kubernetes.io/projected/eeddba64-b6cc-4ae0-8c09-8252931e1778-kube-api-access-56kj5\") pod \"eeddba64-b6cc-4ae0-8c09-8252931e1778\" (UID: \"eeddba64-b6cc-4ae0-8c09-8252931e1778\") " Feb 26 08:39:03 crc 
kubenswrapper[4741]: I0226 08:39:03.735781 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeddba64-b6cc-4ae0-8c09-8252931e1778-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eeddba64-b6cc-4ae0-8c09-8252931e1778" (UID: "eeddba64-b6cc-4ae0-8c09-8252931e1778"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.736362 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04f305d3-533d-46ac-9db3-fd55e864eb83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04f305d3-533d-46ac-9db3-fd55e864eb83" (UID: "04f305d3-533d-46ac-9db3-fd55e864eb83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.737415 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04f305d3-533d-46ac-9db3-fd55e864eb83-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.737510 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eeddba64-b6cc-4ae0-8c09-8252931e1778-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.753179 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-stlzj-config-84ljd"] Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.762313 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v24ql" event={"ID":"04f305d3-533d-46ac-9db3-fd55e864eb83","Type":"ContainerDied","Data":"cc4b3f079c538a7ea723873554332ea013a897503cc0b31e84a95cb533042575"} Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.762360 4741 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc4b3f079c538a7ea723873554332ea013a897503cc0b31e84a95cb533042575" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.762462 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v24ql" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.771725 4741 generic.go:334] "Generic (PLEG): container finished" podID="1aa2ef24-5972-42d6-b38e-adaef893b130" containerID="5d59b3a2f05c1e1be2c355709693839d893192716e8ab99fa18422e08892be4d" exitCode=0 Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.771830 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6rxbw" event={"ID":"1aa2ef24-5972-42d6-b38e-adaef893b130","Type":"ContainerDied","Data":"5d59b3a2f05c1e1be2c355709693839d893192716e8ab99fa18422e08892be4d"} Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.774765 4741 generic.go:334] "Generic (PLEG): container finished" podID="10ea5eb4-4f56-417f-84fd-5ae940e74516" containerID="ed81ea1b45a6117535ca7d1d4b879dbe45c3a8ed95ea4cb344ab4cef24aeae82" exitCode=0 Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.774921 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-t7rkz" event={"ID":"10ea5eb4-4f56-417f-84fd-5ae940e74516","Type":"ContainerDied","Data":"ed81ea1b45a6117535ca7d1d4b879dbe45c3a8ed95ea4cb344ab4cef24aeae82"} Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.776691 4741 generic.go:334] "Generic (PLEG): container finished" podID="e00378f6-b3d9-40f7-889c-a6cce27718c4" containerID="49492da9e102c1d580506b4d5f4b63b99805c13713c27abd920fecfc7940b33d" exitCode=0 Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.776811 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-abee-account-create-update-jg7s7" 
event={"ID":"e00378f6-b3d9-40f7-889c-a6cce27718c4","Type":"ContainerDied","Data":"49492da9e102c1d580506b4d5f4b63b99805c13713c27abd920fecfc7940b33d"} Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.778394 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-26f9-account-create-update-dsrjk" event={"ID":"93463751-5c16-4d33-abad-392b566eef58","Type":"ContainerStarted","Data":"d1b8da48215504119ad9671b68c4329ad275e573c944c1388fb5f60d5c187ce5"} Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.778422 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-26f9-account-create-update-dsrjk" event={"ID":"93463751-5c16-4d33-abad-392b566eef58","Type":"ContainerStarted","Data":"dc74a49511cbeb94e0a667561321d7704b9213cc2b84ec4c922efe1ec9537610"} Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.785256 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2dff-account-create-update-nxlxz" event={"ID":"eeddba64-b6cc-4ae0-8c09-8252931e1778","Type":"ContainerDied","Data":"ed0ea9fef86c18b931992cad034abfec004ae6ab6891fa5482c1d3e1d84bca0a"} Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.785287 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2dff-account-create-update-nxlxz" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.785318 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed0ea9fef86c18b931992cad034abfec004ae6ab6891fa5482c1d3e1d84bca0a" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.785276 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.837078 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04f305d3-533d-46ac-9db3-fd55e864eb83-kube-api-access-wfwrk" (OuterVolumeSpecName: "kube-api-access-wfwrk") pod "04f305d3-533d-46ac-9db3-fd55e864eb83" (UID: "04f305d3-533d-46ac-9db3-fd55e864eb83"). InnerVolumeSpecName "kube-api-access-wfwrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.850402 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeddba64-b6cc-4ae0-8c09-8252931e1778-kube-api-access-56kj5" (OuterVolumeSpecName: "kube-api-access-56kj5") pod "eeddba64-b6cc-4ae0-8c09-8252931e1778" (UID: "eeddba64-b6cc-4ae0-8c09-8252931e1778"). InnerVolumeSpecName "kube-api-access-56kj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.870929 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfwrk\" (UniqueName: \"kubernetes.io/projected/04f305d3-533d-46ac-9db3-fd55e864eb83-kube-api-access-wfwrk\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.870978 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56kj5\" (UniqueName: \"kubernetes.io/projected/eeddba64-b6cc-4ae0-8c09-8252931e1778-kube-api-access-56kj5\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:03 crc kubenswrapper[4741]: I0226 08:39:03.879587 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-26f9-account-create-update-dsrjk" podStartSLOduration=3.879555337 podStartE2EDuration="3.879555337s" podCreationTimestamp="2026-02-26 08:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-26 08:39:03.840233522 +0000 UTC m=+1578.836170909" watchObservedRunningTime="2026-02-26 08:39:03.879555337 +0000 UTC m=+1578.875492724" Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.022330 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:39:04 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:39:04 crc kubenswrapper[4741]: > Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.160726 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.301725 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpmx2\" (UniqueName: \"kubernetes.io/projected/85c980da-2e8f-4979-b5a8-760039e24ea8-kube-api-access-xpmx2\") pod \"85c980da-2e8f-4979-b5a8-760039e24ea8\" (UID: \"85c980da-2e8f-4979-b5a8-760039e24ea8\") " Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.301832 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c980da-2e8f-4979-b5a8-760039e24ea8-operator-scripts\") pod \"85c980da-2e8f-4979-b5a8-760039e24ea8\" (UID: \"85c980da-2e8f-4979-b5a8-760039e24ea8\") " Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.303974 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85c980da-2e8f-4979-b5a8-760039e24ea8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85c980da-2e8f-4979-b5a8-760039e24ea8" (UID: "85c980da-2e8f-4979-b5a8-760039e24ea8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.334653 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c980da-2e8f-4979-b5a8-760039e24ea8-kube-api-access-xpmx2" (OuterVolumeSpecName: "kube-api-access-xpmx2") pod "85c980da-2e8f-4979-b5a8-760039e24ea8" (UID: "85c980da-2e8f-4979-b5a8-760039e24ea8"). InnerVolumeSpecName "kube-api-access-xpmx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.405737 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpmx2\" (UniqueName: \"kubernetes.io/projected/85c980da-2e8f-4979-b5a8-760039e24ea8-kube-api-access-xpmx2\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.406179 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c980da-2e8f-4979-b5a8-760039e24ea8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.805866 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-stlzj-config-84ljd" event={"ID":"f76a21fd-2727-4fa8-acfc-02fc255c2d1f","Type":"ContainerStarted","Data":"d78751a0e72f35abb6bc147d483c0ff3c66ce29e5d4203c12e0edceebcab8a95"} Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.805934 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-stlzj-config-84ljd" event={"ID":"f76a21fd-2727-4fa8-acfc-02fc255c2d1f","Type":"ContainerStarted","Data":"5328162e7f74fd7e8e95da9c37d02a2b6d82ead0f7ce4c4bce6c450fda037143"} Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.809490 4741 generic.go:334] "Generic (PLEG): container finished" podID="93463751-5c16-4d33-abad-392b566eef58" containerID="d1b8da48215504119ad9671b68c4329ad275e573c944c1388fb5f60d5c187ce5" exitCode=0 Feb 26 08:39:04 crc 
kubenswrapper[4741]: I0226 08:39:04.809628 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-26f9-account-create-update-dsrjk" event={"ID":"93463751-5c16-4d33-abad-392b566eef58","Type":"ContainerDied","Data":"d1b8da48215504119ad9671b68c4329ad275e573c944c1388fb5f60d5c187ce5"} Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.811762 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.813578 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-07e6-account-create-update-qwfjj" event={"ID":"85c980da-2e8f-4979-b5a8-760039e24ea8","Type":"ContainerDied","Data":"587eb51702c39102f723225f3802bb78410e84eb4567427d1d0cdeda2bfe34e8"} Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.813620 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="587eb51702c39102f723225f3802bb78410e84eb4567427d1d0cdeda2bfe34e8" Feb 26 08:39:04 crc kubenswrapper[4741]: I0226 08:39:04.843384 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-stlzj-config-84ljd" podStartSLOduration=3.843349792 podStartE2EDuration="3.843349792s" podCreationTimestamp="2026-02-26 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:39:04.837296121 +0000 UTC m=+1579.833233508" watchObservedRunningTime="2026-02-26 08:39:04.843349792 +0000 UTC m=+1579.839287209" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.361207 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-wz7kr"] Feb 26 08:39:05 crc kubenswrapper[4741]: E0226 08:39:05.362277 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f305d3-533d-46ac-9db3-fd55e864eb83" 
containerName="mariadb-account-create-update" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.362293 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f305d3-533d-46ac-9db3-fd55e864eb83" containerName="mariadb-account-create-update" Feb 26 08:39:05 crc kubenswrapper[4741]: E0226 08:39:05.362310 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeddba64-b6cc-4ae0-8c09-8252931e1778" containerName="mariadb-account-create-update" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.362317 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeddba64-b6cc-4ae0-8c09-8252931e1778" containerName="mariadb-account-create-update" Feb 26 08:39:05 crc kubenswrapper[4741]: E0226 08:39:05.362333 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c980da-2e8f-4979-b5a8-760039e24ea8" containerName="mariadb-account-create-update" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.362339 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c980da-2e8f-4979-b5a8-760039e24ea8" containerName="mariadb-account-create-update" Feb 26 08:39:05 crc kubenswrapper[4741]: E0226 08:39:05.362352 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5e2451-f816-4a9a-a18d-806eb3f5cf79" containerName="mariadb-database-create" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.362358 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5e2451-f816-4a9a-a18d-806eb3f5cf79" containerName="mariadb-database-create" Feb 26 08:39:05 crc kubenswrapper[4741]: E0226 08:39:05.362400 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d5b3c3-ad6b-4a93-86f5-842d24d6c20b" containerName="mariadb-database-create" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.362408 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d5b3c3-ad6b-4a93-86f5-842d24d6c20b" containerName="mariadb-database-create" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.362650 4741 
memory_manager.go:354] "RemoveStaleState removing state" podUID="04f305d3-533d-46ac-9db3-fd55e864eb83" containerName="mariadb-account-create-update" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.362665 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeddba64-b6cc-4ae0-8c09-8252931e1778" containerName="mariadb-account-create-update" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.362675 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5e2451-f816-4a9a-a18d-806eb3f5cf79" containerName="mariadb-database-create" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.362688 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d5b3c3-ad6b-4a93-86f5-842d24d6c20b" containerName="mariadb-database-create" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.362701 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="85c980da-2e8f-4979-b5a8-760039e24ea8" containerName="mariadb-account-create-update" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.363700 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.375232 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.375332 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7wdvc" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.380568 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wz7kr"] Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.440880 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-config-data\") pod \"glance-db-sync-wz7kr\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.440950 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-combined-ca-bundle\") pod \"glance-db-sync-wz7kr\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.441019 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-db-sync-config-data\") pod \"glance-db-sync-wz7kr\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.441059 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcnt5\" (UniqueName: 
\"kubernetes.io/projected/2325772f-698b-4597-8918-0f46f598545e-kube-api-access-fcnt5\") pod \"glance-db-sync-wz7kr\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.502694 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-abee-account-create-update-jg7s7" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.544456 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-config-data\") pod \"glance-db-sync-wz7kr\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.544537 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-combined-ca-bundle\") pod \"glance-db-sync-wz7kr\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.544663 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-db-sync-config-data\") pod \"glance-db-sync-wz7kr\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.544747 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcnt5\" (UniqueName: \"kubernetes.io/projected/2325772f-698b-4597-8918-0f46f598545e-kube-api-access-fcnt5\") pod \"glance-db-sync-wz7kr\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.583361 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-db-sync-config-data\") pod \"glance-db-sync-wz7kr\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.588805 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-combined-ca-bundle\") pod \"glance-db-sync-wz7kr\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.605910 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-config-data\") pod \"glance-db-sync-wz7kr\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.614506 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcnt5\" (UniqueName: \"kubernetes.io/projected/2325772f-698b-4597-8918-0f46f598545e-kube-api-access-fcnt5\") pod \"glance-db-sync-wz7kr\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.650123 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkk9x\" (UniqueName: \"kubernetes.io/projected/e00378f6-b3d9-40f7-889c-a6cce27718c4-kube-api-access-rkk9x\") pod \"e00378f6-b3d9-40f7-889c-a6cce27718c4\" (UID: \"e00378f6-b3d9-40f7-889c-a6cce27718c4\") " Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.650784 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e00378f6-b3d9-40f7-889c-a6cce27718c4-operator-scripts\") pod \"e00378f6-b3d9-40f7-889c-a6cce27718c4\" (UID: \"e00378f6-b3d9-40f7-889c-a6cce27718c4\") " Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.652759 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e00378f6-b3d9-40f7-889c-a6cce27718c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e00378f6-b3d9-40f7-889c-a6cce27718c4" (UID: "e00378f6-b3d9-40f7-889c-a6cce27718c4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.668460 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00378f6-b3d9-40f7-889c-a6cce27718c4-kube-api-access-rkk9x" (OuterVolumeSpecName: "kube-api-access-rkk9x") pod "e00378f6-b3d9-40f7-889c-a6cce27718c4" (UID: "e00378f6-b3d9-40f7-889c-a6cce27718c4"). InnerVolumeSpecName "kube-api-access-rkk9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.754211 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00378f6-b3d9-40f7-889c-a6cce27718c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.754249 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkk9x\" (UniqueName: \"kubernetes.io/projected/e00378f6-b3d9-40f7-889c-a6cce27718c4-kube-api-access-rkk9x\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.881739 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wz7kr" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.981815 4741 generic.go:334] "Generic (PLEG): container finished" podID="24120f9b-9d9b-4783-9dd9-2450215d3d26" containerID="cff3e36d89d4668265be9293420d4fb1167b136812e855b906ffbbe078fd1f09" exitCode=0 Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.986429 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6rxbw" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.990612 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-t7rkz" Feb 26 08:39:05 crc kubenswrapper[4741]: I0226 08:39:05.998101 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"24120f9b-9d9b-4783-9dd9-2450215d3d26","Type":"ContainerDied","Data":"cff3e36d89d4668265be9293420d4fb1167b136812e855b906ffbbe078fd1f09"} Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.017479 4741 generic.go:334] "Generic (PLEG): container finished" podID="f76a21fd-2727-4fa8-acfc-02fc255c2d1f" containerID="d78751a0e72f35abb6bc147d483c0ff3c66ce29e5d4203c12e0edceebcab8a95" exitCode=0 Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.017587 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-stlzj-config-84ljd" event={"ID":"f76a21fd-2727-4fa8-acfc-02fc255c2d1f","Type":"ContainerDied","Data":"d78751a0e72f35abb6bc147d483c0ff3c66ce29e5d4203c12e0edceebcab8a95"} Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.032338 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6rxbw" event={"ID":"1aa2ef24-5972-42d6-b38e-adaef893b130","Type":"ContainerDied","Data":"7ffbaff6e7ebf69ee8e315fa625948b7e958f5808b353179e3cfcc5b9b87e7be"} Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.032416 4741 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="7ffbaff6e7ebf69ee8e315fa625948b7e958f5808b353179e3cfcc5b9b87e7be" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.032525 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6rxbw" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.050025 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-t7rkz" event={"ID":"10ea5eb4-4f56-417f-84fd-5ae940e74516","Type":"ContainerDied","Data":"c21e683f566828af44518b4f575ed232daac8ace1edce1865551c83b96111d30"} Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.050079 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c21e683f566828af44518b4f575ed232daac8ace1edce1865551c83b96111d30" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.050268 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-t7rkz" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.095772 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2x52\" (UniqueName: \"kubernetes.io/projected/10ea5eb4-4f56-417f-84fd-5ae940e74516-kube-api-access-d2x52\") pod \"10ea5eb4-4f56-417f-84fd-5ae940e74516\" (UID: \"10ea5eb4-4f56-417f-84fd-5ae940e74516\") " Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.095960 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ea5eb4-4f56-417f-84fd-5ae940e74516-operator-scripts\") pod \"10ea5eb4-4f56-417f-84fd-5ae940e74516\" (UID: \"10ea5eb4-4f56-417f-84fd-5ae940e74516\") " Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.100466 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10ea5eb4-4f56-417f-84fd-5ae940e74516-kube-api-access-d2x52" (OuterVolumeSpecName: 
"kube-api-access-d2x52") pod "10ea5eb4-4f56-417f-84fd-5ae940e74516" (UID: "10ea5eb4-4f56-417f-84fd-5ae940e74516"). InnerVolumeSpecName "kube-api-access-d2x52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.103081 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10ea5eb4-4f56-417f-84fd-5ae940e74516-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10ea5eb4-4f56-417f-84fd-5ae940e74516" (UID: "10ea5eb4-4f56-417f-84fd-5ae940e74516"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.116651 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-abee-account-create-update-jg7s7" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.117762 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-abee-account-create-update-jg7s7" event={"ID":"e00378f6-b3d9-40f7-889c-a6cce27718c4","Type":"ContainerDied","Data":"13c239a93ed35f873614b99c16dac86d28418f403ea20c4de7fbf48050e22ef8"} Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.117790 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13c239a93ed35f873614b99c16dac86d28418f403ea20c4de7fbf48050e22ef8" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.198831 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv2cw\" (UniqueName: \"kubernetes.io/projected/1aa2ef24-5972-42d6-b38e-adaef893b130-kube-api-access-vv2cw\") pod \"1aa2ef24-5972-42d6-b38e-adaef893b130\" (UID: \"1aa2ef24-5972-42d6-b38e-adaef893b130\") " Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.198974 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1aa2ef24-5972-42d6-b38e-adaef893b130-operator-scripts\") pod \"1aa2ef24-5972-42d6-b38e-adaef893b130\" (UID: \"1aa2ef24-5972-42d6-b38e-adaef893b130\") " Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.199888 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ea5eb4-4f56-417f-84fd-5ae940e74516-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.199907 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2x52\" (UniqueName: \"kubernetes.io/projected/10ea5eb4-4f56-417f-84fd-5ae940e74516-kube-api-access-d2x52\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.200852 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa2ef24-5972-42d6-b38e-adaef893b130-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1aa2ef24-5972-42d6-b38e-adaef893b130" (UID: "1aa2ef24-5972-42d6-b38e-adaef893b130"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.211925 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa2ef24-5972-42d6-b38e-adaef893b130-kube-api-access-vv2cw" (OuterVolumeSpecName: "kube-api-access-vv2cw") pod "1aa2ef24-5972-42d6-b38e-adaef893b130" (UID: "1aa2ef24-5972-42d6-b38e-adaef893b130"). InnerVolumeSpecName "kube-api-access-vv2cw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.304348 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv2cw\" (UniqueName: \"kubernetes.io/projected/1aa2ef24-5972-42d6-b38e-adaef893b130-kube-api-access-vv2cw\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.304394 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1aa2ef24-5972-42d6-b38e-adaef893b130-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.445050 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-stlzj" Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.780241 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wz7kr"] Feb 26 08:39:06 crc kubenswrapper[4741]: I0226 08:39:06.898186 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-26f9-account-create-update-dsrjk" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.031055 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93463751-5c16-4d33-abad-392b566eef58-operator-scripts\") pod \"93463751-5c16-4d33-abad-392b566eef58\" (UID: \"93463751-5c16-4d33-abad-392b566eef58\") " Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.031559 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2mpf\" (UniqueName: \"kubernetes.io/projected/93463751-5c16-4d33-abad-392b566eef58-kube-api-access-g2mpf\") pod \"93463751-5c16-4d33-abad-392b566eef58\" (UID: \"93463751-5c16-4d33-abad-392b566eef58\") " Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.037541 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93463751-5c16-4d33-abad-392b566eef58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93463751-5c16-4d33-abad-392b566eef58" (UID: "93463751-5c16-4d33-abad-392b566eef58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.042241 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93463751-5c16-4d33-abad-392b566eef58-kube-api-access-g2mpf" (OuterVolumeSpecName: "kube-api-access-g2mpf") pod "93463751-5c16-4d33-abad-392b566eef58" (UID: "93463751-5c16-4d33-abad-392b566eef58"). InnerVolumeSpecName "kube-api-access-g2mpf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.136045 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"24120f9b-9d9b-4783-9dd9-2450215d3d26","Type":"ContainerStarted","Data":"de03aacc60568608b6c772c583bc76aee35b4a0e11eb6e238f39d2c36de52133"} Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.137897 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2mpf\" (UniqueName: \"kubernetes.io/projected/93463751-5c16-4d33-abad-392b566eef58-kube-api-access-g2mpf\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.137942 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93463751-5c16-4d33-abad-392b566eef58-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.139865 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wz7kr" event={"ID":"2325772f-698b-4597-8918-0f46f598545e","Type":"ContainerStarted","Data":"8e8ac8f4e5b05b7c6e9145c660c1005baedbffe10147ae4f25ddb35c74939b14"} Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.143046 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-26f9-account-create-update-dsrjk" event={"ID":"93463751-5c16-4d33-abad-392b566eef58","Type":"ContainerDied","Data":"dc74a49511cbeb94e0a667561321d7704b9213cc2b84ec4c922efe1ec9537610"} Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.143093 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc74a49511cbeb94e0a667561321d7704b9213cc2b84ec4c922efe1ec9537610" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.143264 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-26f9-account-create-update-dsrjk" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.145339 4741 generic.go:334] "Generic (PLEG): container finished" podID="548f1177-df4c-4b50-920f-f5b9ff95c283" containerID="2974879b5e173841f919882fcee2e3848b7b48261d59cae34e225a0bb0cb1e94" exitCode=0 Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.145606 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xwx84" event={"ID":"548f1177-df4c-4b50-920f-f5b9ff95c283","Type":"ContainerDied","Data":"2974879b5e173841f919882fcee2e3848b7b48261d59cae34e225a0bb0cb1e94"} Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.644067 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7"] Feb 26 08:39:07 crc kubenswrapper[4741]: E0226 08:39:07.644849 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ea5eb4-4f56-417f-84fd-5ae940e74516" containerName="mariadb-database-create" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.644878 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ea5eb4-4f56-417f-84fd-5ae940e74516" containerName="mariadb-database-create" Feb 26 08:39:07 crc kubenswrapper[4741]: E0226 08:39:07.644891 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00378f6-b3d9-40f7-889c-a6cce27718c4" containerName="mariadb-account-create-update" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.644900 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00378f6-b3d9-40f7-889c-a6cce27718c4" containerName="mariadb-account-create-update" Feb 26 08:39:07 crc kubenswrapper[4741]: E0226 08:39:07.644937 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa2ef24-5972-42d6-b38e-adaef893b130" containerName="mariadb-database-create" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.644947 4741 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1aa2ef24-5972-42d6-b38e-adaef893b130" containerName="mariadb-database-create" Feb 26 08:39:07 crc kubenswrapper[4741]: E0226 08:39:07.644978 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93463751-5c16-4d33-abad-392b566eef58" containerName="mariadb-account-create-update" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.644988 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="93463751-5c16-4d33-abad-392b566eef58" containerName="mariadb-account-create-update" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.645280 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="10ea5eb4-4f56-417f-84fd-5ae940e74516" containerName="mariadb-database-create" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.645305 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aa2ef24-5972-42d6-b38e-adaef893b130" containerName="mariadb-database-create" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.645337 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00378f6-b3d9-40f7-889c-a6cce27718c4" containerName="mariadb-account-create-update" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.645351 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="93463751-5c16-4d33-abad-392b566eef58" containerName="mariadb-account-create-update" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.646658 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.658070 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.662047 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7"] Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.755695 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-scripts\") pod \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.755813 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzrt4\" (UniqueName: \"kubernetes.io/projected/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-kube-api-access-lzrt4\") pod \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.755876 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-run-ovn\") pod \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.755940 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-run\") pod \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.756083 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-log-ovn\") pod \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") 
" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.756068 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f76a21fd-2727-4fa8-acfc-02fc255c2d1f" (UID: "f76a21fd-2727-4fa8-acfc-02fc255c2d1f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.756195 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-run" (OuterVolumeSpecName: "var-run") pod "f76a21fd-2727-4fa8-acfc-02fc255c2d1f" (UID: "f76a21fd-2727-4fa8-acfc-02fc255c2d1f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.756250 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-additional-scripts\") pod \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\" (UID: \"f76a21fd-2727-4fa8-acfc-02fc255c2d1f\") " Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.756343 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f76a21fd-2727-4fa8-acfc-02fc255c2d1f" (UID: "f76a21fd-2727-4fa8-acfc-02fc255c2d1f"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.756893 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d613263d-cc5a-4d4b-8327-cb8a3faec8a7-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-gmjx7\" (UID: \"d613263d-cc5a-4d4b-8327-cb8a3faec8a7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.757187 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f76a21fd-2727-4fa8-acfc-02fc255c2d1f" (UID: "f76a21fd-2727-4fa8-acfc-02fc255c2d1f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.757203 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtp75\" (UniqueName: \"kubernetes.io/projected/d613263d-cc5a-4d4b-8327-cb8a3faec8a7-kube-api-access-gtp75\") pod \"mysqld-exporter-openstack-cell1-db-create-gmjx7\" (UID: \"d613263d-cc5a-4d4b-8327-cb8a3faec8a7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.757314 4741 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.757329 4741 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-run\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.757339 
4741 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.757355 4741 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.757537 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-scripts" (OuterVolumeSpecName: "scripts") pod "f76a21fd-2727-4fa8-acfc-02fc255c2d1f" (UID: "f76a21fd-2727-4fa8-acfc-02fc255c2d1f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.769269 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-kube-api-access-lzrt4" (OuterVolumeSpecName: "kube-api-access-lzrt4") pod "f76a21fd-2727-4fa8-acfc-02fc255c2d1f" (UID: "f76a21fd-2727-4fa8-acfc-02fc255c2d1f"). InnerVolumeSpecName "kube-api-access-lzrt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.772843 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-be7b-account-create-update-v5l4z"] Feb 26 08:39:07 crc kubenswrapper[4741]: E0226 08:39:07.773654 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76a21fd-2727-4fa8-acfc-02fc255c2d1f" containerName="ovn-config" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.773674 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76a21fd-2727-4fa8-acfc-02fc255c2d1f" containerName="ovn-config" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.774406 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76a21fd-2727-4fa8-acfc-02fc255c2d1f" containerName="ovn-config" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.776304 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.779867 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.785601 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.834246 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-be7b-account-create-update-v5l4z"] Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.873269 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d613263d-cc5a-4d4b-8327-cb8a3faec8a7-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-gmjx7\" (UID: \"d613263d-cc5a-4d4b-8327-cb8a3faec8a7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7" 
Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.873854 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c52b481-9036-4a56-a248-30b506dd1bea-operator-scripts\") pod \"mysqld-exporter-be7b-account-create-update-v5l4z\" (UID: \"9c52b481-9036-4a56-a248-30b506dd1bea\") " pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.876175 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtp75\" (UniqueName: \"kubernetes.io/projected/d613263d-cc5a-4d4b-8327-cb8a3faec8a7-kube-api-access-gtp75\") pod \"mysqld-exporter-openstack-cell1-db-create-gmjx7\" (UID: \"d613263d-cc5a-4d4b-8327-cb8a3faec8a7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.876354 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw7gn\" (UniqueName: \"kubernetes.io/projected/9c52b481-9036-4a56-a248-30b506dd1bea-kube-api-access-tw7gn\") pod \"mysqld-exporter-be7b-account-create-update-v5l4z\" (UID: \"9c52b481-9036-4a56-a248-30b506dd1bea\") " pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.880303 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d613263d-cc5a-4d4b-8327-cb8a3faec8a7-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-gmjx7\" (UID: \"d613263d-cc5a-4d4b-8327-cb8a3faec8a7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.893688 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-scripts\") 
on node \"crc\" DevicePath \"\"" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.895705 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzrt4\" (UniqueName: \"kubernetes.io/projected/f76a21fd-2727-4fa8-acfc-02fc255c2d1f-kube-api-access-lzrt4\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.937669 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtp75\" (UniqueName: \"kubernetes.io/projected/d613263d-cc5a-4d4b-8327-cb8a3faec8a7-kube-api-access-gtp75\") pod \"mysqld-exporter-openstack-cell1-db-create-gmjx7\" (UID: \"d613263d-cc5a-4d4b-8327-cb8a3faec8a7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.993872 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7" Feb 26 08:39:07 crc kubenswrapper[4741]: I0226 08:39:07.999313 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw7gn\" (UniqueName: \"kubernetes.io/projected/9c52b481-9036-4a56-a248-30b506dd1bea-kube-api-access-tw7gn\") pod \"mysqld-exporter-be7b-account-create-update-v5l4z\" (UID: \"9c52b481-9036-4a56-a248-30b506dd1bea\") " pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:07.999790 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c52b481-9036-4a56-a248-30b506dd1bea-operator-scripts\") pod \"mysqld-exporter-be7b-account-create-update-v5l4z\" (UID: \"9c52b481-9036-4a56-a248-30b506dd1bea\") " pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.001494 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9c52b481-9036-4a56-a248-30b506dd1bea-operator-scripts\") pod \"mysqld-exporter-be7b-account-create-update-v5l4z\" (UID: \"9c52b481-9036-4a56-a248-30b506dd1bea\") " pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.070279 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw7gn\" (UniqueName: \"kubernetes.io/projected/9c52b481-9036-4a56-a248-30b506dd1bea-kube-api-access-tw7gn\") pod \"mysqld-exporter-be7b-account-create-update-v5l4z\" (UID: \"9c52b481-9036-4a56-a248-30b506dd1bea\") " pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.150704 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.182248 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-stlzj-config-84ljd"] Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.205792 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-stlzj-config-84ljd"] Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.224496 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.254015 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-stlzj-config-84ljd" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.254235 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5328162e7f74fd7e8e95da9c37d02a2b6d82ead0f7ce4c4bce6c450fda037143" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.540387 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-xlrfl"] Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.544728 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-xlrfl" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.556790 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-xlrfl"] Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.619915 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31535f9-ac90-4d2b-bcc1-445fc2abc892-operator-scripts\") pod \"heat-db-create-xlrfl\" (UID: \"f31535f9-ac90-4d2b-bcc1-445fc2abc892\") " pod="openstack/heat-db-create-xlrfl" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.619996 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjq77\" (UniqueName: \"kubernetes.io/projected/f31535f9-ac90-4d2b-bcc1-445fc2abc892-kube-api-access-tjq77\") pod \"heat-db-create-xlrfl\" (UID: \"f31535f9-ac90-4d2b-bcc1-445fc2abc892\") " pod="openstack/heat-db-create-xlrfl" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.722064 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjq77\" (UniqueName: \"kubernetes.io/projected/f31535f9-ac90-4d2b-bcc1-445fc2abc892-kube-api-access-tjq77\") pod \"heat-db-create-xlrfl\" (UID: \"f31535f9-ac90-4d2b-bcc1-445fc2abc892\") " pod="openstack/heat-db-create-xlrfl" Feb 26 08:39:08 crc 
kubenswrapper[4741]: I0226 08:39:08.735668 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-833b-account-create-update-nq9n4"] Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.737444 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-833b-account-create-update-nq9n4" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.747911 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31535f9-ac90-4d2b-bcc1-445fc2abc892-operator-scripts\") pod \"heat-db-create-xlrfl\" (UID: \"f31535f9-ac90-4d2b-bcc1-445fc2abc892\") " pod="openstack/heat-db-create-xlrfl" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.749422 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31535f9-ac90-4d2b-bcc1-445fc2abc892-operator-scripts\") pod \"heat-db-create-xlrfl\" (UID: \"f31535f9-ac90-4d2b-bcc1-445fc2abc892\") " pod="openstack/heat-db-create-xlrfl" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.749861 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.776065 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-v7nc2"] Feb 26 08:39:08 crc kubenswrapper[4741]: I0226 08:39:08.778776 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-v7nc2" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:08.842846 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v7nc2"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:08.843866 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjq77\" (UniqueName: \"kubernetes.io/projected/f31535f9-ac90-4d2b-bcc1-445fc2abc892-kube-api-access-tjq77\") pod \"heat-db-create-xlrfl\" (UID: \"f31535f9-ac90-4d2b-bcc1-445fc2abc892\") " pod="openstack/heat-db-create-xlrfl" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:08.862026 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twjkq\" (UniqueName: \"kubernetes.io/projected/b16704f2-d6ef-4c31-b3a8-533129c97ec2-kube-api-access-twjkq\") pod \"cinder-db-create-v7nc2\" (UID: \"b16704f2-d6ef-4c31-b3a8-533129c97ec2\") " pod="openstack/cinder-db-create-v7nc2" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:08.862564 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2f7ea40-66e5-4509-a8aa-4fb66d51ca19-operator-scripts\") pod \"heat-833b-account-create-update-nq9n4\" (UID: \"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19\") " pod="openstack/heat-833b-account-create-update-nq9n4" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:08.878684 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b16704f2-d6ef-4c31-b3a8-533129c97ec2-operator-scripts\") pod \"cinder-db-create-v7nc2\" (UID: \"b16704f2-d6ef-4c31-b3a8-533129c97ec2\") " pod="openstack/cinder-db-create-v7nc2" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:08.878887 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-4pcbq\" (UniqueName: \"kubernetes.io/projected/e2f7ea40-66e5-4509-a8aa-4fb66d51ca19-kube-api-access-4pcbq\") pod \"heat-833b-account-create-update-nq9n4\" (UID: \"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19\") " pod="openstack/heat-833b-account-create-update-nq9n4" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:08.879379 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-833b-account-create-update-nq9n4"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:08.982253 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twjkq\" (UniqueName: \"kubernetes.io/projected/b16704f2-d6ef-4c31-b3a8-533129c97ec2-kube-api-access-twjkq\") pod \"cinder-db-create-v7nc2\" (UID: \"b16704f2-d6ef-4c31-b3a8-533129c97ec2\") " pod="openstack/cinder-db-create-v7nc2" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:08.982329 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2f7ea40-66e5-4509-a8aa-4fb66d51ca19-operator-scripts\") pod \"heat-833b-account-create-update-nq9n4\" (UID: \"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19\") " pod="openstack/heat-833b-account-create-update-nq9n4" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:08.982469 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b16704f2-d6ef-4c31-b3a8-533129c97ec2-operator-scripts\") pod \"cinder-db-create-v7nc2\" (UID: \"b16704f2-d6ef-4c31-b3a8-533129c97ec2\") " pod="openstack/cinder-db-create-v7nc2" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:08.982527 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pcbq\" (UniqueName: \"kubernetes.io/projected/e2f7ea40-66e5-4509-a8aa-4fb66d51ca19-kube-api-access-4pcbq\") pod \"heat-833b-account-create-update-nq9n4\" (UID: 
\"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19\") " pod="openstack/heat-833b-account-create-update-nq9n4" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:08.993839 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2f7ea40-66e5-4509-a8aa-4fb66d51ca19-operator-scripts\") pod \"heat-833b-account-create-update-nq9n4\" (UID: \"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19\") " pod="openstack/heat-833b-account-create-update-nq9n4" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:08.996353 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b16704f2-d6ef-4c31-b3a8-533129c97ec2-operator-scripts\") pod \"cinder-db-create-v7nc2\" (UID: \"b16704f2-d6ef-4c31-b3a8-533129c97ec2\") " pod="openstack/cinder-db-create-v7nc2" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.049018 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pcbq\" (UniqueName: \"kubernetes.io/projected/e2f7ea40-66e5-4509-a8aa-4fb66d51ca19-kube-api-access-4pcbq\") pod \"heat-833b-account-create-update-nq9n4\" (UID: \"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19\") " pod="openstack/heat-833b-account-create-update-nq9n4" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.070308 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twjkq\" (UniqueName: \"kubernetes.io/projected/b16704f2-d6ef-4c31-b3a8-533129c97ec2-kube-api-access-twjkq\") pod \"cinder-db-create-v7nc2\" (UID: \"b16704f2-d6ef-4c31-b3a8-533129c97ec2\") " pod="openstack/cinder-db-create-v7nc2" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.096239 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-37f8-account-create-update-pmrwl"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.098732 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-37f8-account-create-update-pmrwl" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.122920 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-37f8-account-create-update-pmrwl"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.128149 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.176725 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rzqlp"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.183867 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rzqlp" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.206086 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2glzj\" (UniqueName: \"kubernetes.io/projected/76abc98c-8108-43d6-b219-d8a228ee9de1-kube-api-access-2glzj\") pod \"cinder-37f8-account-create-update-pmrwl\" (UID: \"76abc98c-8108-43d6-b219-d8a228ee9de1\") " pod="openstack/cinder-37f8-account-create-update-pmrwl" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.206209 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76abc98c-8108-43d6-b219-d8a228ee9de1-operator-scripts\") pod \"cinder-37f8-account-create-update-pmrwl\" (UID: \"76abc98c-8108-43d6-b219-d8a228ee9de1\") " pod="openstack/cinder-37f8-account-create-update-pmrwl" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.306168 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rzqlp"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.308441 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpq87\" (UniqueName: 
\"kubernetes.io/projected/275147ec-fc05-4ce6-92e7-f9ed21d8b85a-kube-api-access-dpq87\") pod \"neutron-db-create-rzqlp\" (UID: \"275147ec-fc05-4ce6-92e7-f9ed21d8b85a\") " pod="openstack/neutron-db-create-rzqlp" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.308505 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76abc98c-8108-43d6-b219-d8a228ee9de1-operator-scripts\") pod \"cinder-37f8-account-create-update-pmrwl\" (UID: \"76abc98c-8108-43d6-b219-d8a228ee9de1\") " pod="openstack/cinder-37f8-account-create-update-pmrwl" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.308574 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/275147ec-fc05-4ce6-92e7-f9ed21d8b85a-operator-scripts\") pod \"neutron-db-create-rzqlp\" (UID: \"275147ec-fc05-4ce6-92e7-f9ed21d8b85a\") " pod="openstack/neutron-db-create-rzqlp" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.308725 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2glzj\" (UniqueName: \"kubernetes.io/projected/76abc98c-8108-43d6-b219-d8a228ee9de1-kube-api-access-2glzj\") pod \"cinder-37f8-account-create-update-pmrwl\" (UID: \"76abc98c-8108-43d6-b219-d8a228ee9de1\") " pod="openstack/cinder-37f8-account-create-update-pmrwl" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.310532 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76abc98c-8108-43d6-b219-d8a228ee9de1-operator-scripts\") pod \"cinder-37f8-account-create-update-pmrwl\" (UID: \"76abc98c-8108-43d6-b219-d8a228ee9de1\") " pod="openstack/cinder-37f8-account-create-update-pmrwl" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.323197 4741 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-db-create-dn6fj"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.331014 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dn6fj" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.334890 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-dn6fj"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.475744 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-xlrfl" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.547632 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2glzj\" (UniqueName: \"kubernetes.io/projected/76abc98c-8108-43d6-b219-d8a228ee9de1-kube-api-access-2glzj\") pod \"cinder-37f8-account-create-update-pmrwl\" (UID: \"76abc98c-8108-43d6-b219-d8a228ee9de1\") " pod="openstack/cinder-37f8-account-create-update-pmrwl" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.559844 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpq87\" (UniqueName: \"kubernetes.io/projected/275147ec-fc05-4ce6-92e7-f9ed21d8b85a-kube-api-access-dpq87\") pod \"neutron-db-create-rzqlp\" (UID: \"275147ec-fc05-4ce6-92e7-f9ed21d8b85a\") " pod="openstack/neutron-db-create-rzqlp" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.564920 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/275147ec-fc05-4ce6-92e7-f9ed21d8b85a-operator-scripts\") pod \"neutron-db-create-rzqlp\" (UID: \"275147ec-fc05-4ce6-92e7-f9ed21d8b85a\") " pod="openstack/neutron-db-create-rzqlp" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.565258 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7bxh\" (UniqueName: 
\"kubernetes.io/projected/adabc55e-a269-495c-9d28-d8da64354f35-kube-api-access-q7bxh\") pod \"barbican-db-create-dn6fj\" (UID: \"adabc55e-a269-495c-9d28-d8da64354f35\") " pod="openstack/barbican-db-create-dn6fj" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.565319 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adabc55e-a269-495c-9d28-d8da64354f35-operator-scripts\") pod \"barbican-db-create-dn6fj\" (UID: \"adabc55e-a269-495c-9d28-d8da64354f35\") " pod="openstack/barbican-db-create-dn6fj" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.567484 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/275147ec-fc05-4ce6-92e7-f9ed21d8b85a-operator-scripts\") pod \"neutron-db-create-rzqlp\" (UID: \"275147ec-fc05-4ce6-92e7-f9ed21d8b85a\") " pod="openstack/neutron-db-create-rzqlp" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.569084 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-833b-account-create-update-nq9n4" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.644235 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.672826 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7bxh\" (UniqueName: \"kubernetes.io/projected/adabc55e-a269-495c-9d28-d8da64354f35-kube-api-access-q7bxh\") pod \"barbican-db-create-dn6fj\" (UID: \"adabc55e-a269-495c-9d28-d8da64354f35\") " pod="openstack/barbican-db-create-dn6fj" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.672886 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adabc55e-a269-495c-9d28-d8da64354f35-operator-scripts\") pod \"barbican-db-create-dn6fj\" (UID: \"adabc55e-a269-495c-9d28-d8da64354f35\") " pod="openstack/barbican-db-create-dn6fj" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.673958 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adabc55e-a269-495c-9d28-d8da64354f35-operator-scripts\") pod \"barbican-db-create-dn6fj\" (UID: \"adabc55e-a269-495c-9d28-d8da64354f35\") " pod="openstack/barbican-db-create-dn6fj" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.712467 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bfe8-account-create-update-96sxz"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.712770 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpq87\" (UniqueName: \"kubernetes.io/projected/275147ec-fc05-4ce6-92e7-f9ed21d8b85a-kube-api-access-dpq87\") pod \"neutron-db-create-rzqlp\" (UID: \"275147ec-fc05-4ce6-92e7-f9ed21d8b85a\") " pod="openstack/neutron-db-create-rzqlp" Feb 26 08:39:09 crc 
kubenswrapper[4741]: I0226 08:39:09.714385 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfe8-account-create-update-96sxz" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.720466 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.721294 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7bxh\" (UniqueName: \"kubernetes.io/projected/adabc55e-a269-495c-9d28-d8da64354f35-kube-api-access-q7bxh\") pod \"barbican-db-create-dn6fj\" (UID: \"adabc55e-a269-495c-9d28-d8da64354f35\") " pod="openstack/barbican-db-create-dn6fj" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.775498 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h96l\" (UniqueName: \"kubernetes.io/projected/eb214ad8-a870-4561-9c32-27c7b3943839-kube-api-access-2h96l\") pod \"neutron-bfe8-account-create-update-96sxz\" (UID: \"eb214ad8-a870-4561-9c32-27c7b3943839\") " pod="openstack/neutron-bfe8-account-create-update-96sxz" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.775681 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb214ad8-a870-4561-9c32-27c7b3943839-operator-scripts\") pod \"neutron-bfe8-account-create-update-96sxz\" (UID: \"eb214ad8-a870-4561-9c32-27c7b3943839\") " pod="openstack/neutron-bfe8-account-create-update-96sxz" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.852695 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f76a21fd-2727-4fa8-acfc-02fc255c2d1f" path="/var/lib/kubelet/pods/f76a21fd-2727-4fa8-acfc-02fc255c2d1f/volumes" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.856894 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-bfe8-account-create-update-96sxz"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.880125 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h96l\" (UniqueName: \"kubernetes.io/projected/eb214ad8-a870-4561-9c32-27c7b3943839-kube-api-access-2h96l\") pod \"neutron-bfe8-account-create-update-96sxz\" (UID: \"eb214ad8-a870-4561-9c32-27c7b3943839\") " pod="openstack/neutron-bfe8-account-create-update-96sxz" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.880209 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb214ad8-a870-4561-9c32-27c7b3943839-operator-scripts\") pod \"neutron-bfe8-account-create-update-96sxz\" (UID: \"eb214ad8-a870-4561-9c32-27c7b3943839\") " pod="openstack/neutron-bfe8-account-create-update-96sxz" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.880952 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7114-account-create-update-jdvp4"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.881497 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb214ad8-a870-4561-9c32-27c7b3943839-operator-scripts\") pod \"neutron-bfe8-account-create-update-96sxz\" (UID: \"eb214ad8-a870-4561-9c32-27c7b3943839\") " pod="openstack/neutron-bfe8-account-create-update-96sxz" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.882732 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7114-account-create-update-jdvp4" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.888620 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.906915 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7114-account-create-update-jdvp4"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.948806 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-be7b-account-create-update-v5l4z"] Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.982355 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6nwl\" (UniqueName: \"kubernetes.io/projected/cf167f51-9893-44e5-99f4-9841055d2e1b-kube-api-access-f6nwl\") pod \"barbican-7114-account-create-update-jdvp4\" (UID: \"cf167f51-9893-44e5-99f4-9841055d2e1b\") " pod="openstack/barbican-7114-account-create-update-jdvp4" Feb 26 08:39:09 crc kubenswrapper[4741]: I0226 08:39:09.982481 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf167f51-9893-44e5-99f4-9841055d2e1b-operator-scripts\") pod \"barbican-7114-account-create-update-jdvp4\" (UID: \"cf167f51-9893-44e5-99f4-9841055d2e1b\") " pod="openstack/barbican-7114-account-create-update-jdvp4" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.017038 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h96l\" (UniqueName: \"kubernetes.io/projected/eb214ad8-a870-4561-9c32-27c7b3943839-kube-api-access-2h96l\") pod \"neutron-bfe8-account-create-update-96sxz\" (UID: \"eb214ad8-a870-4561-9c32-27c7b3943839\") " pod="openstack/neutron-bfe8-account-create-update-96sxz" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.085361 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf167f51-9893-44e5-99f4-9841055d2e1b-operator-scripts\") pod \"barbican-7114-account-create-update-jdvp4\" (UID: \"cf167f51-9893-44e5-99f4-9841055d2e1b\") " pod="openstack/barbican-7114-account-create-update-jdvp4" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.085620 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6nwl\" (UniqueName: \"kubernetes.io/projected/cf167f51-9893-44e5-99f4-9841055d2e1b-kube-api-access-f6nwl\") pod \"barbican-7114-account-create-update-jdvp4\" (UID: \"cf167f51-9893-44e5-99f4-9841055d2e1b\") " pod="openstack/barbican-7114-account-create-update-jdvp4" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.086520 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf167f51-9893-44e5-99f4-9841055d2e1b-operator-scripts\") pod \"barbican-7114-account-create-update-jdvp4\" (UID: \"cf167f51-9893-44e5-99f4-9841055d2e1b\") " pod="openstack/barbican-7114-account-create-update-jdvp4" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.104929 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6nwl\" (UniqueName: \"kubernetes.io/projected/cf167f51-9893-44e5-99f4-9841055d2e1b-kube-api-access-f6nwl\") pod \"barbican-7114-account-create-update-jdvp4\" (UID: \"cf167f51-9893-44e5-99f4-9841055d2e1b\") " pod="openstack/barbican-7114-account-create-update-jdvp4" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.241148 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-37f8-account-create-update-pmrwl" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.314000 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xwx84" event={"ID":"548f1177-df4c-4b50-920f-f5b9ff95c283","Type":"ContainerDied","Data":"5db09ebc684ded5a0d31745fb6382d187774b7210ef84e4ea59e8333717d4d53"} Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.314058 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5db09ebc684ded5a0d31745fb6382d187774b7210ef84e4ea59e8333717d4d53" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.346398 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7" event={"ID":"d613263d-cc5a-4d4b-8327-cb8a3faec8a7","Type":"ContainerStarted","Data":"4fd03e755d41bcbcf9d45f24a59a8e2a27f2ec0afe32470951a84814d183026f"} Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.375740 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z" event={"ID":"9c52b481-9036-4a56-a248-30b506dd1bea","Type":"ContainerStarted","Data":"31ea6ec312650b1992e3a82c7e9f90fa0341eef67afe428d31c5dd8e9d37ec0d"} Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.470446 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dn6fj" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.472775 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rzqlp" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.544802 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v7nc2" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.590641 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bfe8-account-create-update-96sxz" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.606551 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.631948 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7114-account-create-update-jdvp4" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.716434 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75pnb\" (UniqueName: \"kubernetes.io/projected/548f1177-df4c-4b50-920f-f5b9ff95c283-kube-api-access-75pnb\") pod \"548f1177-df4c-4b50-920f-f5b9ff95c283\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.716941 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/548f1177-df4c-4b50-920f-f5b9ff95c283-etc-swift\") pod \"548f1177-df4c-4b50-920f-f5b9ff95c283\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.717283 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-combined-ca-bundle\") pod \"548f1177-df4c-4b50-920f-f5b9ff95c283\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.717389 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-swiftconf\") pod \"548f1177-df4c-4b50-920f-f5b9ff95c283\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.717476 4741 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-dispersionconf\") pod \"548f1177-df4c-4b50-920f-f5b9ff95c283\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.717598 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/548f1177-df4c-4b50-920f-f5b9ff95c283-ring-data-devices\") pod \"548f1177-df4c-4b50-920f-f5b9ff95c283\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.717800 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/548f1177-df4c-4b50-920f-f5b9ff95c283-scripts\") pod \"548f1177-df4c-4b50-920f-f5b9ff95c283\" (UID: \"548f1177-df4c-4b50-920f-f5b9ff95c283\") " Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.721838 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/548f1177-df4c-4b50-920f-f5b9ff95c283-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "548f1177-df4c-4b50-920f-f5b9ff95c283" (UID: "548f1177-df4c-4b50-920f-f5b9ff95c283"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.722854 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/548f1177-df4c-4b50-920f-f5b9ff95c283-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "548f1177-df4c-4b50-920f-f5b9ff95c283" (UID: "548f1177-df4c-4b50-920f-f5b9ff95c283"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.730933 4741 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/548f1177-df4c-4b50-920f-f5b9ff95c283-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.731210 4741 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/548f1177-df4c-4b50-920f-f5b9ff95c283-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.743647 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-833b-account-create-update-nq9n4"] Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.778988 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/548f1177-df4c-4b50-920f-f5b9ff95c283-kube-api-access-75pnb" (OuterVolumeSpecName: "kube-api-access-75pnb") pod "548f1177-df4c-4b50-920f-f5b9ff95c283" (UID: "548f1177-df4c-4b50-920f-f5b9ff95c283"). InnerVolumeSpecName "kube-api-access-75pnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.798505 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "548f1177-df4c-4b50-920f-f5b9ff95c283" (UID: "548f1177-df4c-4b50-920f-f5b9ff95c283"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.842665 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75pnb\" (UniqueName: \"kubernetes.io/projected/548f1177-df4c-4b50-920f-f5b9ff95c283-kube-api-access-75pnb\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.842708 4741 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.880156 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/548f1177-df4c-4b50-920f-f5b9ff95c283-scripts" (OuterVolumeSpecName: "scripts") pod "548f1177-df4c-4b50-920f-f5b9ff95c283" (UID: "548f1177-df4c-4b50-920f-f5b9ff95c283"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.900737 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "548f1177-df4c-4b50-920f-f5b9ff95c283" (UID: "548f1177-df4c-4b50-920f-f5b9ff95c283"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.924214 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "548f1177-df4c-4b50-920f-f5b9ff95c283" (UID: "548f1177-df4c-4b50-920f-f5b9ff95c283"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.946544 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.946587 4741 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/548f1177-df4c-4b50-920f-f5b9ff95c283-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:10 crc kubenswrapper[4741]: I0226 08:39:10.946599 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/548f1177-df4c-4b50-920f-f5b9ff95c283-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.133774 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-xlrfl"] Feb 26 08:39:11 crc kubenswrapper[4741]: W0226 08:39:11.183183 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf31535f9_ac90_4d2b_bcc1_445fc2abc892.slice/crio-9736a768f3e96f18691bd92d5c1a24f7c0fcddfaef610505105fb1d7bd49cd95 WatchSource:0}: Error finding container 9736a768f3e96f18691bd92d5c1a24f7c0fcddfaef610505105fb1d7bd49cd95: Status 404 returned error can't find the container with id 9736a768f3e96f18691bd92d5c1a24f7c0fcddfaef610505105fb1d7bd49cd95 Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.358959 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-37f8-account-create-update-pmrwl"] Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.478525 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-9zc9f"] Feb 26 08:39:11 crc kubenswrapper[4741]: E0226 08:39:11.482286 4741 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="548f1177-df4c-4b50-920f-f5b9ff95c283" containerName="swift-ring-rebalance" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.482323 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="548f1177-df4c-4b50-920f-f5b9ff95c283" containerName="swift-ring-rebalance" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.487289 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="548f1177-df4c-4b50-920f-f5b9ff95c283" containerName="swift-ring-rebalance" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.488844 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9zc9f" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.502813 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.511821 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.517685 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.518289 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4srwh" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.589739 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-833b-account-create-update-nq9n4" event={"ID":"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19","Type":"ContainerStarted","Data":"295b6847589143da5c7d4830ca2e89d8cba079282b155deb6395c2612984b1bb"} Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.601926 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91289eb1-fb29-4b09-9f36-1f5d250f6b39-config-data\") pod \"keystone-db-sync-9zc9f\" (UID: \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\") " 
pod="openstack/keystone-db-sync-9zc9f" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.602197 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm2wt\" (UniqueName: \"kubernetes.io/projected/91289eb1-fb29-4b09-9f36-1f5d250f6b39-kube-api-access-rm2wt\") pod \"keystone-db-sync-9zc9f\" (UID: \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\") " pod="openstack/keystone-db-sync-9zc9f" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.602365 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91289eb1-fb29-4b09-9f36-1f5d250f6b39-combined-ca-bundle\") pod \"keystone-db-sync-9zc9f\" (UID: \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\") " pod="openstack/keystone-db-sync-9zc9f" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.637885 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xlrfl" event={"ID":"f31535f9-ac90-4d2b-bcc1-445fc2abc892","Type":"ContainerStarted","Data":"9736a768f3e96f18691bd92d5c1a24f7c0fcddfaef610505105fb1d7bd49cd95"} Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.673539 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7" event={"ID":"d613263d-cc5a-4d4b-8327-cb8a3faec8a7","Type":"ContainerStarted","Data":"c3b3b68ab49373e482a2051ddcb2811aa704dc35f5465b979f0925e40661363f"} Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.707910 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9zc9f"] Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.716649 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z" event={"ID":"9c52b481-9036-4a56-a248-30b506dd1bea","Type":"ContainerStarted","Data":"a2222b0246e682d4a24195ec683d19fca7742be29c67c82c0178b9076de830ee"} Feb 26 08:39:11 
crc kubenswrapper[4741]: I0226 08:39:11.740539 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"24120f9b-9d9b-4783-9dd9-2450215d3d26","Type":"ContainerStarted","Data":"3babfe96d0e64f23dbf653df78f9297a208b2b5703dd2b3defe418da81c56e14"} Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.740682 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xwx84" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.746575 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91289eb1-fb29-4b09-9f36-1f5d250f6b39-config-data\") pod \"keystone-db-sync-9zc9f\" (UID: \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\") " pod="openstack/keystone-db-sync-9zc9f" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.746812 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm2wt\" (UniqueName: \"kubernetes.io/projected/91289eb1-fb29-4b09-9f36-1f5d250f6b39-kube-api-access-rm2wt\") pod \"keystone-db-sync-9zc9f\" (UID: \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\") " pod="openstack/keystone-db-sync-9zc9f" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.746972 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91289eb1-fb29-4b09-9f36-1f5d250f6b39-combined-ca-bundle\") pod \"keystone-db-sync-9zc9f\" (UID: \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\") " pod="openstack/keystone-db-sync-9zc9f" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.764596 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91289eb1-fb29-4b09-9f36-1f5d250f6b39-combined-ca-bundle\") pod \"keystone-db-sync-9zc9f\" (UID: \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\") " 
pod="openstack/keystone-db-sync-9zc9f" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.776843 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91289eb1-fb29-4b09-9f36-1f5d250f6b39-config-data\") pod \"keystone-db-sync-9zc9f\" (UID: \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\") " pod="openstack/keystone-db-sync-9zc9f" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.844057 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm2wt\" (UniqueName: \"kubernetes.io/projected/91289eb1-fb29-4b09-9f36-1f5d250f6b39-kube-api-access-rm2wt\") pod \"keystone-db-sync-9zc9f\" (UID: \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\") " pod="openstack/keystone-db-sync-9zc9f" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.907266 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7" podStartSLOduration=4.907232703 podStartE2EDuration="4.907232703s" podCreationTimestamp="2026-02-26 08:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:39:11.731063307 +0000 UTC m=+1586.727000704" watchObservedRunningTime="2026-02-26 08:39:11.907232703 +0000 UTC m=+1586.903170090" Feb 26 08:39:11 crc kubenswrapper[4741]: I0226 08:39:11.934241 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z" podStartSLOduration=4.934215179 podStartE2EDuration="4.934215179s" podCreationTimestamp="2026-02-26 08:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:39:11.768235221 +0000 UTC m=+1586.764172608" watchObservedRunningTime="2026-02-26 08:39:11.934215179 +0000 UTC m=+1586.930152566" Feb 26 08:39:12 crc 
kubenswrapper[4741]: I0226 08:39:12.003584 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-dn6fj"]
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.006574 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9zc9f"
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.506925 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rzqlp"]
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.514617 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bfe8-account-create-update-96sxz"]
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.602294 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v7nc2"]
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.625392 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7114-account-create-update-jdvp4"]
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.796835 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v7nc2" event={"ID":"b16704f2-d6ef-4c31-b3a8-533129c97ec2","Type":"ContainerStarted","Data":"5169b86fae339d87f86f143413b0d35afef69de7e5c6bfba5740456f92ed8118"}
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.812536 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"24120f9b-9d9b-4783-9dd9-2450215d3d26","Type":"ContainerStarted","Data":"c56185c9a8dfc20ba145dcea835f99d8111004c2957f375c30dd7c70bad37bc0"}
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.823334 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7114-account-create-update-jdvp4" event={"ID":"cf167f51-9893-44e5-99f4-9841055d2e1b","Type":"ContainerStarted","Data":"d005ed6e82d908e2408755f5eaa0e8f8b35982fd50866a500c1a27fcde671f70"}
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.828644 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-37f8-account-create-update-pmrwl" event={"ID":"76abc98c-8108-43d6-b219-d8a228ee9de1","Type":"ContainerStarted","Data":"84be32f16f1941d4eabe835f92db1ee80e3917dfe1322f3463776406bf3ee641"}
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.828708 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-37f8-account-create-update-pmrwl" event={"ID":"76abc98c-8108-43d6-b219-d8a228ee9de1","Type":"ContainerStarted","Data":"94980164945af1a5a519abc2740139fb971fdb4be2e1d17524bb18c657f118ba"}
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.839889 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dn6fj" event={"ID":"adabc55e-a269-495c-9d28-d8da64354f35","Type":"ContainerStarted","Data":"7656b7a6f6b9fb335275da7409262283af24f037d235d46beab4108f7795443b"}
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.857174 4741 generic.go:334] "Generic (PLEG): container finished" podID="d613263d-cc5a-4d4b-8327-cb8a3faec8a7" containerID="c3b3b68ab49373e482a2051ddcb2811aa704dc35f5465b979f0925e40661363f" exitCode=0
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.857326 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7" event={"ID":"d613263d-cc5a-4d4b-8327-cb8a3faec8a7","Type":"ContainerDied","Data":"c3b3b68ab49373e482a2051ddcb2811aa704dc35f5465b979f0925e40661363f"}
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.866723 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.866691387 podStartE2EDuration="20.866691387s" podCreationTimestamp="2026-02-26 08:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:39:12.857970319 +0000 UTC m=+1587.853907726" watchObservedRunningTime="2026-02-26 08:39:12.866691387 +0000 UTC m=+1587.862628784"
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.887540 4741 generic.go:334] "Generic (PLEG): container finished" podID="9c52b481-9036-4a56-a248-30b506dd1bea" containerID="a2222b0246e682d4a24195ec683d19fca7742be29c67c82c0178b9076de830ee" exitCode=0
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.887656 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z" event={"ID":"9c52b481-9036-4a56-a248-30b506dd1bea","Type":"ContainerDied","Data":"a2222b0246e682d4a24195ec683d19fca7742be29c67c82c0178b9076de830ee"}
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.894238 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-37f8-account-create-update-pmrwl" podStartSLOduration=4.894210377 podStartE2EDuration="4.894210377s" podCreationTimestamp="2026-02-26 08:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:39:12.887931449 +0000 UTC m=+1587.883868836" watchObservedRunningTime="2026-02-26 08:39:12.894210377 +0000 UTC m=+1587.890147764"
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.902559 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rzqlp" event={"ID":"275147ec-fc05-4ce6-92e7-f9ed21d8b85a","Type":"ContainerStarted","Data":"7e31317f6fbebee87e7f1d22d22af2fc5519cf0fe3ed69537a683d7e12cda01f"}
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.917400 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfe8-account-create-update-96sxz" event={"ID":"eb214ad8-a870-4561-9c32-27c7b3943839","Type":"ContainerStarted","Data":"511ebccb909e27f27f71d4f15dbb7389e0020bc077b9d0d9885ed5980bfa0e6c"}
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.928834 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-833b-account-create-update-nq9n4" event={"ID":"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19","Type":"ContainerStarted","Data":"b661443ad9f0e8c7b58f9674d4034ca1a0b8e81b4e6c7814813f04e9069f6e34"}
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.947744 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xlrfl" event={"ID":"f31535f9-ac90-4d2b-bcc1-445fc2abc892","Type":"ContainerStarted","Data":"28104f18c3c62903044707c0234661e5649c83daf3d8baf2e3e114c6c1be0b6f"}
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.966373 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.971440 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-833b-account-create-update-nq9n4" podStartSLOduration=4.971414027 podStartE2EDuration="4.971414027s" podCreationTimestamp="2026-02-26 08:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:39:12.957305287 +0000 UTC m=+1587.953242674" watchObservedRunningTime="2026-02-26 08:39:12.971414027 +0000 UTC m=+1587.967351414"
Feb 26 08:39:12 crc kubenswrapper[4741]: I0226 08:39:12.991548 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-xlrfl" podStartSLOduration=4.991510137 podStartE2EDuration="4.991510137s" podCreationTimestamp="2026-02-26 08:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:39:12.981836803 +0000 UTC m=+1587.977774190" watchObservedRunningTime="2026-02-26 08:39:12.991510137 +0000 UTC m=+1587.987447544"
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.196879 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9zc9f"]
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.968761 4741 generic.go:334] "Generic (PLEG): container finished" podID="f31535f9-ac90-4d2b-bcc1-445fc2abc892" containerID="28104f18c3c62903044707c0234661e5649c83daf3d8baf2e3e114c6c1be0b6f" exitCode=0
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.968877 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xlrfl" event={"ID":"f31535f9-ac90-4d2b-bcc1-445fc2abc892","Type":"ContainerDied","Data":"28104f18c3c62903044707c0234661e5649c83daf3d8baf2e3e114c6c1be0b6f"}
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.972722 4741 generic.go:334] "Generic (PLEG): container finished" podID="adabc55e-a269-495c-9d28-d8da64354f35" containerID="583f330f0940479ac4b6a998fa06866efa22ea82372cbf445987db0fe0a40767" exitCode=0
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.972848 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dn6fj" event={"ID":"adabc55e-a269-495c-9d28-d8da64354f35","Type":"ContainerDied","Data":"583f330f0940479ac4b6a998fa06866efa22ea82372cbf445987db0fe0a40767"}
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.975781 4741 generic.go:334] "Generic (PLEG): container finished" podID="b16704f2-d6ef-4c31-b3a8-533129c97ec2" containerID="a35481248c39820381d946da9a88876af06d587c223b7d733121f5b9afdff07c" exitCode=0
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.975987 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v7nc2" event={"ID":"b16704f2-d6ef-4c31-b3a8-533129c97ec2","Type":"ContainerDied","Data":"a35481248c39820381d946da9a88876af06d587c223b7d733121f5b9afdff07c"}
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.978521 4741 generic.go:334] "Generic (PLEG): container finished" podID="76abc98c-8108-43d6-b219-d8a228ee9de1" containerID="84be32f16f1941d4eabe835f92db1ee80e3917dfe1322f3463776406bf3ee641" exitCode=0
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.978587 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-37f8-account-create-update-pmrwl" event={"ID":"76abc98c-8108-43d6-b219-d8a228ee9de1","Type":"ContainerDied","Data":"84be32f16f1941d4eabe835f92db1ee80e3917dfe1322f3463776406bf3ee641"}
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.980504 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9zc9f" event={"ID":"91289eb1-fb29-4b09-9f36-1f5d250f6b39","Type":"ContainerStarted","Data":"01116fe1b9ba473681091d6e333f78c1246d8771085eb2ff694e94e50c1e2d83"}
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.982229 4741 generic.go:334] "Generic (PLEG): container finished" podID="275147ec-fc05-4ce6-92e7-f9ed21d8b85a" containerID="ef21e7636cab28e61207dd6582b54bfbcf4373f8d2f6002be32f4e3e1854b0c3" exitCode=0
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.982289 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rzqlp" event={"ID":"275147ec-fc05-4ce6-92e7-f9ed21d8b85a","Type":"ContainerDied","Data":"ef21e7636cab28e61207dd6582b54bfbcf4373f8d2f6002be32f4e3e1854b0c3"}
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.986056 4741 generic.go:334] "Generic (PLEG): container finished" podID="eb214ad8-a870-4561-9c32-27c7b3943839" containerID="a0bad3a1c8211b027df83c6aa6b97e1824bb3534c3d7b1ff87e6dca6e08fb0a3" exitCode=0
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.986157 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfe8-account-create-update-96sxz" event={"ID":"eb214ad8-a870-4561-9c32-27c7b3943839","Type":"ContainerDied","Data":"a0bad3a1c8211b027df83c6aa6b97e1824bb3534c3d7b1ff87e6dca6e08fb0a3"}
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.989595 4741 generic.go:334] "Generic (PLEG): container finished" podID="cf167f51-9893-44e5-99f4-9841055d2e1b" containerID="2e1bac17917a1dd3ce17c4305364665f8cd7f86139ca5fc6371cd98d091a61ea" exitCode=0
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.989666 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7114-account-create-update-jdvp4" event={"ID":"cf167f51-9893-44e5-99f4-9841055d2e1b","Type":"ContainerDied","Data":"2e1bac17917a1dd3ce17c4305364665f8cd7f86139ca5fc6371cd98d091a61ea"}
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.999057 4741 generic.go:334] "Generic (PLEG): container finished" podID="e2f7ea40-66e5-4509-a8aa-4fb66d51ca19" containerID="b661443ad9f0e8c7b58f9674d4034ca1a0b8e81b4e6c7814813f04e9069f6e34" exitCode=0
Feb 26 08:39:13 crc kubenswrapper[4741]: I0226 08:39:13.999436 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-833b-account-create-update-nq9n4" event={"ID":"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19","Type":"ContainerDied","Data":"b661443ad9f0e8c7b58f9674d4034ca1a0b8e81b4e6c7814813f04e9069f6e34"}
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.060437 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=<
Feb 26 08:39:14 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s
Feb 26 08:39:14 crc kubenswrapper[4741]: >
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.639753 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z"
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.653123 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7"
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.719430 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c52b481-9036-4a56-a248-30b506dd1bea-operator-scripts\") pod \"9c52b481-9036-4a56-a248-30b506dd1bea\" (UID: \"9c52b481-9036-4a56-a248-30b506dd1bea\") "
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.720520 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c52b481-9036-4a56-a248-30b506dd1bea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c52b481-9036-4a56-a248-30b506dd1bea" (UID: "9c52b481-9036-4a56-a248-30b506dd1bea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.720540 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d613263d-cc5a-4d4b-8327-cb8a3faec8a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d613263d-cc5a-4d4b-8327-cb8a3faec8a7" (UID: "d613263d-cc5a-4d4b-8327-cb8a3faec8a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.720579 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d613263d-cc5a-4d4b-8327-cb8a3faec8a7-operator-scripts\") pod \"d613263d-cc5a-4d4b-8327-cb8a3faec8a7\" (UID: \"d613263d-cc5a-4d4b-8327-cb8a3faec8a7\") "
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.720743 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw7gn\" (UniqueName: \"kubernetes.io/projected/9c52b481-9036-4a56-a248-30b506dd1bea-kube-api-access-tw7gn\") pod \"9c52b481-9036-4a56-a248-30b506dd1bea\" (UID: \"9c52b481-9036-4a56-a248-30b506dd1bea\") "
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.720889 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtp75\" (UniqueName: \"kubernetes.io/projected/d613263d-cc5a-4d4b-8327-cb8a3faec8a7-kube-api-access-gtp75\") pod \"d613263d-cc5a-4d4b-8327-cb8a3faec8a7\" (UID: \"d613263d-cc5a-4d4b-8327-cb8a3faec8a7\") "
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.722788 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c52b481-9036-4a56-a248-30b506dd1bea-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.722819 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d613263d-cc5a-4d4b-8327-cb8a3faec8a7-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.736051 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d613263d-cc5a-4d4b-8327-cb8a3faec8a7-kube-api-access-gtp75" (OuterVolumeSpecName: "kube-api-access-gtp75") pod "d613263d-cc5a-4d4b-8327-cb8a3faec8a7" (UID: "d613263d-cc5a-4d4b-8327-cb8a3faec8a7"). InnerVolumeSpecName "kube-api-access-gtp75". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.737878 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c52b481-9036-4a56-a248-30b506dd1bea-kube-api-access-tw7gn" (OuterVolumeSpecName: "kube-api-access-tw7gn") pod "9c52b481-9036-4a56-a248-30b506dd1bea" (UID: "9c52b481-9036-4a56-a248-30b506dd1bea"). InnerVolumeSpecName "kube-api-access-tw7gn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.826674 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw7gn\" (UniqueName: \"kubernetes.io/projected/9c52b481-9036-4a56-a248-30b506dd1bea-kube-api-access-tw7gn\") on node \"crc\" DevicePath \"\""
Feb 26 08:39:14 crc kubenswrapper[4741]: I0226 08:39:14.826723 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtp75\" (UniqueName: \"kubernetes.io/projected/d613263d-cc5a-4d4b-8327-cb8a3faec8a7-kube-api-access-gtp75\") on node \"crc\" DevicePath \"\""
Feb 26 08:39:15 crc kubenswrapper[4741]: I0226 08:39:15.016587 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7"
Feb 26 08:39:15 crc kubenswrapper[4741]: I0226 08:39:15.016575 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7" event={"ID":"d613263d-cc5a-4d4b-8327-cb8a3faec8a7","Type":"ContainerDied","Data":"4fd03e755d41bcbcf9d45f24a59a8e2a27f2ec0afe32470951a84814d183026f"}
Feb 26 08:39:15 crc kubenswrapper[4741]: I0226 08:39:15.016730 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fd03e755d41bcbcf9d45f24a59a8e2a27f2ec0afe32470951a84814d183026f"
Feb 26 08:39:15 crc kubenswrapper[4741]: I0226 08:39:15.022545 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z"
Feb 26 08:39:15 crc kubenswrapper[4741]: I0226 08:39:15.023595 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-be7b-account-create-update-v5l4z" event={"ID":"9c52b481-9036-4a56-a248-30b506dd1bea","Type":"ContainerDied","Data":"31ea6ec312650b1992e3a82c7e9f90fa0341eef67afe428d31c5dd8e9d37ec0d"}
Feb 26 08:39:15 crc kubenswrapper[4741]: I0226 08:39:15.023687 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31ea6ec312650b1992e3a82c7e9f90fa0341eef67afe428d31c5dd8e9d37ec0d"
Feb 26 08:39:15 crc kubenswrapper[4741]: I0226 08:39:15.999288 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-833b-account-create-update-nq9n4"
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.014099 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-37f8-account-create-update-pmrwl"
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.065748 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-833b-account-create-update-nq9n4" event={"ID":"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19","Type":"ContainerDied","Data":"295b6847589143da5c7d4830ca2e89d8cba079282b155deb6395c2612984b1bb"}
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.065829 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="295b6847589143da5c7d4830ca2e89d8cba079282b155deb6395c2612984b1bb"
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.065960 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-833b-account-create-update-nq9n4"
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.068293 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-37f8-account-create-update-pmrwl" event={"ID":"76abc98c-8108-43d6-b219-d8a228ee9de1","Type":"ContainerDied","Data":"94980164945af1a5a519abc2740139fb971fdb4be2e1d17524bb18c657f118ba"}
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.068350 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94980164945af1a5a519abc2740139fb971fdb4be2e1d17524bb18c657f118ba"
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.068372 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-37f8-account-create-update-pmrwl"
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.122209 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76abc98c-8108-43d6-b219-d8a228ee9de1-operator-scripts\") pod \"76abc98c-8108-43d6-b219-d8a228ee9de1\" (UID: \"76abc98c-8108-43d6-b219-d8a228ee9de1\") "
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.122352 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2f7ea40-66e5-4509-a8aa-4fb66d51ca19-operator-scripts\") pod \"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19\" (UID: \"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19\") "
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.122808 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pcbq\" (UniqueName: \"kubernetes.io/projected/e2f7ea40-66e5-4509-a8aa-4fb66d51ca19-kube-api-access-4pcbq\") pod \"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19\" (UID: \"e2f7ea40-66e5-4509-a8aa-4fb66d51ca19\") "
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.122993 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2glzj\" (UniqueName: \"kubernetes.io/projected/76abc98c-8108-43d6-b219-d8a228ee9de1-kube-api-access-2glzj\") pod \"76abc98c-8108-43d6-b219-d8a228ee9de1\" (UID: \"76abc98c-8108-43d6-b219-d8a228ee9de1\") "
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.123951 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f7ea40-66e5-4509-a8aa-4fb66d51ca19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2f7ea40-66e5-4509-a8aa-4fb66d51ca19" (UID: "e2f7ea40-66e5-4509-a8aa-4fb66d51ca19"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.125307 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76abc98c-8108-43d6-b219-d8a228ee9de1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76abc98c-8108-43d6-b219-d8a228ee9de1" (UID: "76abc98c-8108-43d6-b219-d8a228ee9de1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.152504 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f7ea40-66e5-4509-a8aa-4fb66d51ca19-kube-api-access-4pcbq" (OuterVolumeSpecName: "kube-api-access-4pcbq") pod "e2f7ea40-66e5-4509-a8aa-4fb66d51ca19" (UID: "e2f7ea40-66e5-4509-a8aa-4fb66d51ca19"). InnerVolumeSpecName "kube-api-access-4pcbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.162573 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76abc98c-8108-43d6-b219-d8a228ee9de1-kube-api-access-2glzj" (OuterVolumeSpecName: "kube-api-access-2glzj") pod "76abc98c-8108-43d6-b219-d8a228ee9de1" (UID: "76abc98c-8108-43d6-b219-d8a228ee9de1"). InnerVolumeSpecName "kube-api-access-2glzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.226772 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pcbq\" (UniqueName: \"kubernetes.io/projected/e2f7ea40-66e5-4509-a8aa-4fb66d51ca19-kube-api-access-4pcbq\") on node \"crc\" DevicePath \"\""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.226816 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2glzj\" (UniqueName: \"kubernetes.io/projected/76abc98c-8108-43d6-b219-d8a228ee9de1-kube-api-access-2glzj\") on node \"crc\" DevicePath \"\""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.226826 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76abc98c-8108-43d6-b219-d8a228ee9de1-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.226836 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2f7ea40-66e5-4509-a8aa-4fb66d51ca19-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.454699 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v7nc2"
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.522225 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7114-account-create-update-jdvp4"
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.538837 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b16704f2-d6ef-4c31-b3a8-533129c97ec2-operator-scripts\") pod \"b16704f2-d6ef-4c31-b3a8-533129c97ec2\" (UID: \"b16704f2-d6ef-4c31-b3a8-533129c97ec2\") "
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.539319 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twjkq\" (UniqueName: \"kubernetes.io/projected/b16704f2-d6ef-4c31-b3a8-533129c97ec2-kube-api-access-twjkq\") pod \"b16704f2-d6ef-4c31-b3a8-533129c97ec2\" (UID: \"b16704f2-d6ef-4c31-b3a8-533129c97ec2\") "
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.539599 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b16704f2-d6ef-4c31-b3a8-533129c97ec2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b16704f2-d6ef-4c31-b3a8-533129c97ec2" (UID: "b16704f2-d6ef-4c31-b3a8-533129c97ec2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.540255 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b16704f2-d6ef-4c31-b3a8-533129c97ec2-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.642169 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf167f51-9893-44e5-99f4-9841055d2e1b-operator-scripts\") pod \"cf167f51-9893-44e5-99f4-9841055d2e1b\" (UID: \"cf167f51-9893-44e5-99f4-9841055d2e1b\") "
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.642393 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6nwl\" (UniqueName: \"kubernetes.io/projected/cf167f51-9893-44e5-99f4-9841055d2e1b-kube-api-access-f6nwl\") pod \"cf167f51-9893-44e5-99f4-9841055d2e1b\" (UID: \"cf167f51-9893-44e5-99f4-9841055d2e1b\") "
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.643108 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf167f51-9893-44e5-99f4-9841055d2e1b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf167f51-9893-44e5-99f4-9841055d2e1b" (UID: "cf167f51-9893-44e5-99f4-9841055d2e1b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.707807 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf167f51-9893-44e5-99f4-9841055d2e1b-kube-api-access-f6nwl" (OuterVolumeSpecName: "kube-api-access-f6nwl") pod "cf167f51-9893-44e5-99f4-9841055d2e1b" (UID: "cf167f51-9893-44e5-99f4-9841055d2e1b"). InnerVolumeSpecName "kube-api-access-f6nwl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.707970 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16704f2-d6ef-4c31-b3a8-533129c97ec2-kube-api-access-twjkq" (OuterVolumeSpecName: "kube-api-access-twjkq") pod "b16704f2-d6ef-4c31-b3a8-533129c97ec2" (UID: "b16704f2-d6ef-4c31-b3a8-533129c97ec2"). InnerVolumeSpecName "kube-api-access-twjkq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.753278 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf167f51-9893-44e5-99f4-9841055d2e1b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.753338 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twjkq\" (UniqueName: \"kubernetes.io/projected/b16704f2-d6ef-4c31-b3a8-533129c97ec2-kube-api-access-twjkq\") on node \"crc\" DevicePath \"\""
Feb 26 08:39:16 crc kubenswrapper[4741]: I0226 08:39:16.753354 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6nwl\" (UniqueName: \"kubernetes.io/projected/cf167f51-9893-44e5-99f4-9841055d2e1b-kube-api-access-f6nwl\") on node \"crc\" DevicePath \"\""
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.092664 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-xlrfl"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.098501 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dn6fj"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.099348 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v7nc2"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.099499 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v7nc2" event={"ID":"b16704f2-d6ef-4c31-b3a8-533129c97ec2","Type":"ContainerDied","Data":"5169b86fae339d87f86f143413b0d35afef69de7e5c6bfba5740456f92ed8118"}
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.099555 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5169b86fae339d87f86f143413b0d35afef69de7e5c6bfba5740456f92ed8118"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.104904 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7114-account-create-update-jdvp4" event={"ID":"cf167f51-9893-44e5-99f4-9841055d2e1b","Type":"ContainerDied","Data":"d005ed6e82d908e2408755f5eaa0e8f8b35982fd50866a500c1a27fcde671f70"}
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.104938 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7114-account-create-update-jdvp4"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.104958 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d005ed6e82d908e2408755f5eaa0e8f8b35982fd50866a500c1a27fcde671f70"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.114564 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rzqlp"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.118949 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xlrfl" event={"ID":"f31535f9-ac90-4d2b-bcc1-445fc2abc892","Type":"ContainerDied","Data":"9736a768f3e96f18691bd92d5c1a24f7c0fcddfaef610505105fb1d7bd49cd95"}
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.119001 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9736a768f3e96f18691bd92d5c1a24f7c0fcddfaef610505105fb1d7bd49cd95"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.119073 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-xlrfl"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.142081 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dn6fj" event={"ID":"adabc55e-a269-495c-9d28-d8da64354f35","Type":"ContainerDied","Data":"7656b7a6f6b9fb335275da7409262283af24f037d235d46beab4108f7795443b"}
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.142277 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7656b7a6f6b9fb335275da7409262283af24f037d235d46beab4108f7795443b"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.142432 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dn6fj"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.144510 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfe8-account-create-update-96sxz"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.145063 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rzqlp" event={"ID":"275147ec-fc05-4ce6-92e7-f9ed21d8b85a","Type":"ContainerDied","Data":"7e31317f6fbebee87e7f1d22d22af2fc5519cf0fe3ed69537a683d7e12cda01f"}
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.145114 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e31317f6fbebee87e7f1d22d22af2fc5519cf0fe3ed69537a683d7e12cda01f"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.145217 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rzqlp"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.163537 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfe8-account-create-update-96sxz" event={"ID":"eb214ad8-a870-4561-9c32-27c7b3943839","Type":"ContainerDied","Data":"511ebccb909e27f27f71d4f15dbb7389e0020bc077b9d0d9885ed5980bfa0e6c"}
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.163598 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="511ebccb909e27f27f71d4f15dbb7389e0020bc077b9d0d9885ed5980bfa0e6c"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.163679 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfe8-account-create-update-96sxz"
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.164494 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpq87\" (UniqueName: \"kubernetes.io/projected/275147ec-fc05-4ce6-92e7-f9ed21d8b85a-kube-api-access-dpq87\") pod \"275147ec-fc05-4ce6-92e7-f9ed21d8b85a\" (UID: \"275147ec-fc05-4ce6-92e7-f9ed21d8b85a\") "
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.164572 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31535f9-ac90-4d2b-bcc1-445fc2abc892-operator-scripts\") pod \"f31535f9-ac90-4d2b-bcc1-445fc2abc892\" (UID: \"f31535f9-ac90-4d2b-bcc1-445fc2abc892\") "
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.164648 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7bxh\" (UniqueName: \"kubernetes.io/projected/adabc55e-a269-495c-9d28-d8da64354f35-kube-api-access-q7bxh\") pod \"adabc55e-a269-495c-9d28-d8da64354f35\" (UID: \"adabc55e-a269-495c-9d28-d8da64354f35\") "
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.164794 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adabc55e-a269-495c-9d28-d8da64354f35-operator-scripts\") pod \"adabc55e-a269-495c-9d28-d8da64354f35\" (UID: \"adabc55e-a269-495c-9d28-d8da64354f35\") "
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.164836 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/275147ec-fc05-4ce6-92e7-f9ed21d8b85a-operator-scripts\") pod \"275147ec-fc05-4ce6-92e7-f9ed21d8b85a\" (UID: \"275147ec-fc05-4ce6-92e7-f9ed21d8b85a\") "
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.164943 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjq77\" (UniqueName: \"kubernetes.io/projected/f31535f9-ac90-4d2b-bcc1-445fc2abc892-kube-api-access-tjq77\") pod \"f31535f9-ac90-4d2b-bcc1-445fc2abc892\" (UID: \"f31535f9-ac90-4d2b-bcc1-445fc2abc892\") "
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.166697 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f31535f9-ac90-4d2b-bcc1-445fc2abc892-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f31535f9-ac90-4d2b-bcc1-445fc2abc892" (UID: "f31535f9-ac90-4d2b-bcc1-445fc2abc892"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.166848 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275147ec-fc05-4ce6-92e7-f9ed21d8b85a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "275147ec-fc05-4ce6-92e7-f9ed21d8b85a" (UID: "275147ec-fc05-4ce6-92e7-f9ed21d8b85a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.167023 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adabc55e-a269-495c-9d28-d8da64354f35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "adabc55e-a269-495c-9d28-d8da64354f35" (UID: "adabc55e-a269-495c-9d28-d8da64354f35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.184584 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275147ec-fc05-4ce6-92e7-f9ed21d8b85a-kube-api-access-dpq87" (OuterVolumeSpecName: "kube-api-access-dpq87") pod "275147ec-fc05-4ce6-92e7-f9ed21d8b85a" (UID: "275147ec-fc05-4ce6-92e7-f9ed21d8b85a"). InnerVolumeSpecName "kube-api-access-dpq87". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.191026 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adabc55e-a269-495c-9d28-d8da64354f35-kube-api-access-q7bxh" (OuterVolumeSpecName: "kube-api-access-q7bxh") pod "adabc55e-a269-495c-9d28-d8da64354f35" (UID: "adabc55e-a269-495c-9d28-d8da64354f35"). InnerVolumeSpecName "kube-api-access-q7bxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.192039 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31535f9-ac90-4d2b-bcc1-445fc2abc892-kube-api-access-tjq77" (OuterVolumeSpecName: "kube-api-access-tjq77") pod "f31535f9-ac90-4d2b-bcc1-445fc2abc892" (UID: "f31535f9-ac90-4d2b-bcc1-445fc2abc892"). InnerVolumeSpecName "kube-api-access-tjq77". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.268001 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb214ad8-a870-4561-9c32-27c7b3943839-operator-scripts\") pod \"eb214ad8-a870-4561-9c32-27c7b3943839\" (UID: \"eb214ad8-a870-4561-9c32-27c7b3943839\") "
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.268177 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h96l\" (UniqueName: \"kubernetes.io/projected/eb214ad8-a870-4561-9c32-27c7b3943839-kube-api-access-2h96l\") pod \"eb214ad8-a870-4561-9c32-27c7b3943839\" (UID: \"eb214ad8-a870-4561-9c32-27c7b3943839\") "
Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.269299 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpq87\" (UniqueName:
\"kubernetes.io/projected/275147ec-fc05-4ce6-92e7-f9ed21d8b85a-kube-api-access-dpq87\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.269327 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31535f9-ac90-4d2b-bcc1-445fc2abc892-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.269340 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7bxh\" (UniqueName: \"kubernetes.io/projected/adabc55e-a269-495c-9d28-d8da64354f35-kube-api-access-q7bxh\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.269354 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adabc55e-a269-495c-9d28-d8da64354f35-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.269363 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/275147ec-fc05-4ce6-92e7-f9ed21d8b85a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.269373 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjq77\" (UniqueName: \"kubernetes.io/projected/f31535f9-ac90-4d2b-bcc1-445fc2abc892-kube-api-access-tjq77\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.270319 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb214ad8-a870-4561-9c32-27c7b3943839-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb214ad8-a870-4561-9c32-27c7b3943839" (UID: "eb214ad8-a870-4561-9c32-27c7b3943839"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.273451 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb214ad8-a870-4561-9c32-27c7b3943839-kube-api-access-2h96l" (OuterVolumeSpecName: "kube-api-access-2h96l") pod "eb214ad8-a870-4561-9c32-27c7b3943839" (UID: "eb214ad8-a870-4561-9c32-27c7b3943839"). InnerVolumeSpecName "kube-api-access-2h96l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.372895 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb214ad8-a870-4561-9c32-27c7b3943839-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:17 crc kubenswrapper[4741]: I0226 08:39:17.372947 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h96l\" (UniqueName: \"kubernetes.io/projected/eb214ad8-a870-4561-9c32-27c7b3943839-kube-api-access-2h96l\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.020347 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 08:39:18 crc kubenswrapper[4741]: E0226 08:39:18.021646 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275147ec-fc05-4ce6-92e7-f9ed21d8b85a" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.021673 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="275147ec-fc05-4ce6-92e7-f9ed21d8b85a" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: E0226 08:39:18.021698 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf167f51-9893-44e5-99f4-9841055d2e1b" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.021705 4741 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cf167f51-9893-44e5-99f4-9841055d2e1b" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: E0226 08:39:18.021718 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d613263d-cc5a-4d4b-8327-cb8a3faec8a7" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.021725 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d613263d-cc5a-4d4b-8327-cb8a3faec8a7" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: E0226 08:39:18.021737 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adabc55e-a269-495c-9d28-d8da64354f35" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.021744 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="adabc55e-a269-495c-9d28-d8da64354f35" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: E0226 08:39:18.021763 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16704f2-d6ef-4c31-b3a8-533129c97ec2" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.021769 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16704f2-d6ef-4c31-b3a8-533129c97ec2" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: E0226 08:39:18.021779 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb214ad8-a870-4561-9c32-27c7b3943839" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.021786 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb214ad8-a870-4561-9c32-27c7b3943839" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: E0226 08:39:18.021799 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c52b481-9036-4a56-a248-30b506dd1bea" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc 
kubenswrapper[4741]: I0226 08:39:18.021805 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c52b481-9036-4a56-a248-30b506dd1bea" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: E0226 08:39:18.021816 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31535f9-ac90-4d2b-bcc1-445fc2abc892" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.021822 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31535f9-ac90-4d2b-bcc1-445fc2abc892" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: E0226 08:39:18.021843 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76abc98c-8108-43d6-b219-d8a228ee9de1" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.021850 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="76abc98c-8108-43d6-b219-d8a228ee9de1" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: E0226 08:39:18.021863 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f7ea40-66e5-4509-a8aa-4fb66d51ca19" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.021870 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f7ea40-66e5-4509-a8aa-4fb66d51ca19" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.026091 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d613263d-cc5a-4d4b-8327-cb8a3faec8a7" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.026247 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="275147ec-fc05-4ce6-92e7-f9ed21d8b85a" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.026271 4741 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9c52b481-9036-4a56-a248-30b506dd1bea" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.026283 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf167f51-9893-44e5-99f4-9841055d2e1b" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.026314 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31535f9-ac90-4d2b-bcc1-445fc2abc892" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.026323 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16704f2-d6ef-4c31-b3a8-533129c97ec2" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.026337 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="76abc98c-8108-43d6-b219-d8a228ee9de1" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.026356 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="adabc55e-a269-495c-9d28-d8da64354f35" containerName="mariadb-database-create" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.026367 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f7ea40-66e5-4509-a8aa-4fb66d51ca19" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.026398 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb214ad8-a870-4561-9c32-27c7b3943839" containerName="mariadb-account-create-update" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.027598 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.031343 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.043497 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.124676 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f7c78eb-9208-46a4-a062-0bb536df511d-config-data\") pod \"mysqld-exporter-0\" (UID: \"2f7c78eb-9208-46a4-a062-0bb536df511d\") " pod="openstack/mysqld-exporter-0" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.124785 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm9bd\" (UniqueName: \"kubernetes.io/projected/2f7c78eb-9208-46a4-a062-0bb536df511d-kube-api-access-fm9bd\") pod \"mysqld-exporter-0\" (UID: \"2f7c78eb-9208-46a4-a062-0bb536df511d\") " pod="openstack/mysqld-exporter-0" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.124990 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f7c78eb-9208-46a4-a062-0bb536df511d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2f7c78eb-9208-46a4-a062-0bb536df511d\") " pod="openstack/mysqld-exporter-0" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.236535 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f7c78eb-9208-46a4-a062-0bb536df511d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2f7c78eb-9208-46a4-a062-0bb536df511d\") " pod="openstack/mysqld-exporter-0" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.237006 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f7c78eb-9208-46a4-a062-0bb536df511d-config-data\") pod \"mysqld-exporter-0\" (UID: \"2f7c78eb-9208-46a4-a062-0bb536df511d\") " pod="openstack/mysqld-exporter-0" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.237113 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm9bd\" (UniqueName: \"kubernetes.io/projected/2f7c78eb-9208-46a4-a062-0bb536df511d-kube-api-access-fm9bd\") pod \"mysqld-exporter-0\" (UID: \"2f7c78eb-9208-46a4-a062-0bb536df511d\") " pod="openstack/mysqld-exporter-0" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.276765 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f7c78eb-9208-46a4-a062-0bb536df511d-config-data\") pod \"mysqld-exporter-0\" (UID: \"2f7c78eb-9208-46a4-a062-0bb536df511d\") " pod="openstack/mysqld-exporter-0" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.284254 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm9bd\" (UniqueName: \"kubernetes.io/projected/2f7c78eb-9208-46a4-a062-0bb536df511d-kube-api-access-fm9bd\") pod \"mysqld-exporter-0\" (UID: \"2f7c78eb-9208-46a4-a062-0bb536df511d\") " pod="openstack/mysqld-exporter-0" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.294634 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f7c78eb-9208-46a4-a062-0bb536df511d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2f7c78eb-9208-46a4-a062-0bb536df511d\") " pod="openstack/mysqld-exporter-0" Feb 26 08:39:18 crc kubenswrapper[4741]: I0226 08:39:18.358988 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 26 08:39:22 crc kubenswrapper[4741]: I0226 08:39:22.965748 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 26 08:39:22 crc kubenswrapper[4741]: I0226 08:39:22.975546 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 26 08:39:23 crc kubenswrapper[4741]: I0226 08:39:23.309234 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 26 08:39:23 crc kubenswrapper[4741]: I0226 08:39:23.922189 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:39:23 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:39:23 crc kubenswrapper[4741]: > Feb 26 08:39:28 crc kubenswrapper[4741]: I0226 08:39:28.534701 4741 scope.go:117] "RemoveContainer" containerID="8ca4bf11803e59299e69c4b526175e94c1505ef7c1e7e5abc7187ee2253cfb1e" Feb 26 08:39:32 crc kubenswrapper[4741]: I0226 08:39:32.481983 4741 scope.go:117] "RemoveContainer" containerID="fd3adee9dd064679bf59a9b498991c55a5c0fbf66cfd63676141afa04ab9bf13" Feb 26 08:39:33 crc kubenswrapper[4741]: I0226 08:39:33.065108 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 08:39:33 crc kubenswrapper[4741]: I0226 08:39:33.471288 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9zc9f" event={"ID":"91289eb1-fb29-4b09-9f36-1f5d250f6b39","Type":"ContainerStarted","Data":"0b079cdf3c864230332df251d5528b31746635305aa928f13abed5063f0d16c1"} Feb 26 08:39:33 crc kubenswrapper[4741]: I0226 08:39:33.473606 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"2f7c78eb-9208-46a4-a062-0bb536df511d","Type":"ContainerStarted","Data":"edb0f1c585b259a8dcb0f02b69ce0931ba03bc81e9a5016215fe30d309fa5bdf"} Feb 26 08:39:33 crc kubenswrapper[4741]: I0226 08:39:33.475150 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wz7kr" event={"ID":"2325772f-698b-4597-8918-0f46f598545e","Type":"ContainerStarted","Data":"ef1e45fe0d040c52aaf7160a0307b78d1f5dbd00d5b48914ae3125bcb0e7ab5d"} Feb 26 08:39:33 crc kubenswrapper[4741]: I0226 08:39:33.513243 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-9zc9f" podStartSLOduration=3.190553335 podStartE2EDuration="22.513195439s" podCreationTimestamp="2026-02-26 08:39:11 +0000 UTC" firstStartedPulling="2026-02-26 08:39:13.221619424 +0000 UTC m=+1588.217556811" lastFinishedPulling="2026-02-26 08:39:32.544261538 +0000 UTC m=+1607.540198915" observedRunningTime="2026-02-26 08:39:33.498667786 +0000 UTC m=+1608.494605193" watchObservedRunningTime="2026-02-26 08:39:33.513195439 +0000 UTC m=+1608.509132826" Feb 26 08:39:33 crc kubenswrapper[4741]: I0226 08:39:33.531801 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-wz7kr" podStartSLOduration=2.779421237 podStartE2EDuration="28.531771095s" podCreationTimestamp="2026-02-26 08:39:05 +0000 UTC" firstStartedPulling="2026-02-26 08:39:06.792770794 +0000 UTC m=+1581.788708181" lastFinishedPulling="2026-02-26 08:39:32.545120652 +0000 UTC m=+1607.541058039" observedRunningTime="2026-02-26 08:39:33.521257927 +0000 UTC m=+1608.517195334" watchObservedRunningTime="2026-02-26 08:39:33.531771095 +0000 UTC m=+1608.527708482" Feb 26 08:39:33 crc kubenswrapper[4741]: I0226 08:39:33.919747 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:39:33 crc 
kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:39:33 crc kubenswrapper[4741]: > Feb 26 08:39:34 crc kubenswrapper[4741]: I0226 08:39:34.119968 4741 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podce5e2451-f816-4a9a-a18d-806eb3f5cf79"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podce5e2451-f816-4a9a-a18d-806eb3f5cf79] : Timed out while waiting for systemd to remove kubepods-besteffort-podce5e2451_f816_4a9a_a18d_806eb3f5cf79.slice" Feb 26 08:39:34 crc kubenswrapper[4741]: E0226 08:39:34.120044 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podce5e2451-f816-4a9a-a18d-806eb3f5cf79] : unable to destroy cgroup paths for cgroup [kubepods besteffort podce5e2451-f816-4a9a-a18d-806eb3f5cf79] : Timed out while waiting for systemd to remove kubepods-besteffort-podce5e2451_f816_4a9a_a18d_806eb3f5cf79.slice" pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" podUID="ce5e2451-f816-4a9a-a18d-806eb3f5cf79" Feb 26 08:39:34 crc kubenswrapper[4741]: I0226 08:39:34.492868 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-zq6r6" Feb 26 08:39:35 crc kubenswrapper[4741]: I0226 08:39:35.507380 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2f7c78eb-9208-46a4-a062-0bb536df511d","Type":"ContainerStarted","Data":"105b02ef743e46b98835604bb69caaae59a63350fcaa5f62b6d2b022065ec3b7"} Feb 26 08:39:35 crc kubenswrapper[4741]: I0226 08:39:35.533726 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=16.83830975 podStartE2EDuration="18.533698956s" podCreationTimestamp="2026-02-26 08:39:17 +0000 UTC" firstStartedPulling="2026-02-26 08:39:33.080898807 +0000 UTC m=+1608.076836194" lastFinishedPulling="2026-02-26 08:39:34.776288023 +0000 UTC m=+1609.772225400" observedRunningTime="2026-02-26 08:39:35.523411424 +0000 UTC m=+1610.519348811" watchObservedRunningTime="2026-02-26 08:39:35.533698956 +0000 UTC m=+1610.529636343" Feb 26 08:39:37 crc kubenswrapper[4741]: I0226 08:39:37.559673 4741 generic.go:334] "Generic (PLEG): container finished" podID="91289eb1-fb29-4b09-9f36-1f5d250f6b39" containerID="0b079cdf3c864230332df251d5528b31746635305aa928f13abed5063f0d16c1" exitCode=0 Feb 26 08:39:37 crc kubenswrapper[4741]: I0226 08:39:37.559873 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9zc9f" event={"ID":"91289eb1-fb29-4b09-9f36-1f5d250f6b39","Type":"ContainerDied","Data":"0b079cdf3c864230332df251d5528b31746635305aa928f13abed5063f0d16c1"} Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.031490 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9zc9f" Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.172768 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91289eb1-fb29-4b09-9f36-1f5d250f6b39-combined-ca-bundle\") pod \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\" (UID: \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\") " Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.173296 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91289eb1-fb29-4b09-9f36-1f5d250f6b39-config-data\") pod \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\" (UID: \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\") " Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.173370 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm2wt\" (UniqueName: \"kubernetes.io/projected/91289eb1-fb29-4b09-9f36-1f5d250f6b39-kube-api-access-rm2wt\") pod \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\" (UID: \"91289eb1-fb29-4b09-9f36-1f5d250f6b39\") " Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.181247 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91289eb1-fb29-4b09-9f36-1f5d250f6b39-kube-api-access-rm2wt" (OuterVolumeSpecName: "kube-api-access-rm2wt") pod "91289eb1-fb29-4b09-9f36-1f5d250f6b39" (UID: "91289eb1-fb29-4b09-9f36-1f5d250f6b39"). InnerVolumeSpecName "kube-api-access-rm2wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.208153 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91289eb1-fb29-4b09-9f36-1f5d250f6b39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91289eb1-fb29-4b09-9f36-1f5d250f6b39" (UID: "91289eb1-fb29-4b09-9f36-1f5d250f6b39"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.243484 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91289eb1-fb29-4b09-9f36-1f5d250f6b39-config-data" (OuterVolumeSpecName: "config-data") pod "91289eb1-fb29-4b09-9f36-1f5d250f6b39" (UID: "91289eb1-fb29-4b09-9f36-1f5d250f6b39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.276983 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91289eb1-fb29-4b09-9f36-1f5d250f6b39-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.277394 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm2wt\" (UniqueName: \"kubernetes.io/projected/91289eb1-fb29-4b09-9f36-1f5d250f6b39-kube-api-access-rm2wt\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.277496 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91289eb1-fb29-4b09-9f36-1f5d250f6b39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.591941 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9zc9f" event={"ID":"91289eb1-fb29-4b09-9f36-1f5d250f6b39","Type":"ContainerDied","Data":"01116fe1b9ba473681091d6e333f78c1246d8771085eb2ff694e94e50c1e2d83"} Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.592435 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01116fe1b9ba473681091d6e333f78c1246d8771085eb2ff694e94e50c1e2d83" Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.592033 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9zc9f" Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.973817 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8rxn2"] Feb 26 08:39:39 crc kubenswrapper[4741]: E0226 08:39:39.974451 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91289eb1-fb29-4b09-9f36-1f5d250f6b39" containerName="keystone-db-sync" Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.974470 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="91289eb1-fb29-4b09-9f36-1f5d250f6b39" containerName="keystone-db-sync" Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.974681 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="91289eb1-fb29-4b09-9f36-1f5d250f6b39" containerName="keystone-db-sync" Feb 26 08:39:39 crc kubenswrapper[4741]: I0226 08:39:39.976031 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.086201 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8rxn2"] Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.102368 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.102520 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 
08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.102629 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h9ml\" (UniqueName: \"kubernetes.io/projected/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-kube-api-access-5h9ml\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.102677 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-dns-svc\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.102705 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-config\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.120211 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pffnd"] Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.122220 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.134333 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pffnd"] Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.140058 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4srwh" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.140338 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.140376 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.140590 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.144748 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.210854 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc6fp\" (UniqueName: \"kubernetes.io/projected/d47c90e6-d8ce-4f5a-91aa-34459da56120-kube-api-access-cc6fp\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.210980 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.211075 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-config-data\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.211125 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h9ml\" (UniqueName: \"kubernetes.io/projected/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-kube-api-access-5h9ml\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.211143 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-scripts\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.211169 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-credential-keys\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.211198 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-dns-svc\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.211219 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-config\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.211276 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-fernet-keys\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.211298 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.211328 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-combined-ca-bundle\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.212455 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.215401 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-config\") 
pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.215872 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.220973 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-dns-svc\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.303431 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-8n574"] Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.305265 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h9ml\" (UniqueName: \"kubernetes.io/projected/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-kube-api-access-5h9ml\") pod \"dnsmasq-dns-f877ddd87-8rxn2\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.315076 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-combined-ca-bundle\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.350040 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc6fp\" (UniqueName: 
\"kubernetes.io/projected/d47c90e6-d8ce-4f5a-91aa-34459da56120-kube-api-access-cc6fp\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.350546 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-config-data\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.350628 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-scripts\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.350690 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-credential-keys\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.350883 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-fernet-keys\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.328415 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.360513 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-fernet-keys\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.361880 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-config-data\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.364679 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-scripts\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.368587 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-credential-keys\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.371751 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-combined-ca-bundle\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.373739 4741 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/heat-db-sync-8n574"] Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.373889 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-8n574" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.383432 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-r89pd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.392249 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.456945 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc6fp\" (UniqueName: \"kubernetes.io/projected/d47c90e6-d8ce-4f5a-91aa-34459da56120-kube-api-access-cc6fp\") pod \"keystone-bootstrap-pffnd\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.460417 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzmvv\" (UniqueName: \"kubernetes.io/projected/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-kube-api-access-vzmvv\") pod \"heat-db-sync-8n574\" (UID: \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\") " pod="openstack/heat-db-sync-8n574" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.460517 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-combined-ca-bundle\") pod \"heat-db-sync-8n574\" (UID: \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\") " pod="openstack/heat-db-sync-8n574" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.460561 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-config-data\") pod \"heat-db-sync-8n574\" (UID: \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\") " pod="openstack/heat-db-sync-8n574" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.470958 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.561846 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-combined-ca-bundle\") pod \"heat-db-sync-8n574\" (UID: \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\") " pod="openstack/heat-db-sync-8n574" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.561931 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-config-data\") pod \"heat-db-sync-8n574\" (UID: \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\") " pod="openstack/heat-db-sync-8n574" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.562069 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzmvv\" (UniqueName: \"kubernetes.io/projected/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-kube-api-access-vzmvv\") pod \"heat-db-sync-8n574\" (UID: \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\") " pod="openstack/heat-db-sync-8n574" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.612095 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-config-data\") pod \"heat-db-sync-8n574\" (UID: \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\") " pod="openstack/heat-db-sync-8n574" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.612136 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-combined-ca-bundle\") pod \"heat-db-sync-8n574\" (UID: \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\") " pod="openstack/heat-db-sync-8n574" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.622342 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-wbg8p"] Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.625349 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.636081 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.645809 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pthp9" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.647320 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.667169 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-config-data\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.667250 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-db-sync-config-data\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.667328 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-combined-ca-bundle\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.667354 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d1395bb-ffb5-492e-b214-4434c210acf7-etc-machine-id\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.667389 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-scripts\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.667544 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5hpc\" (UniqueName: \"kubernetes.io/projected/4d1395bb-ffb5-492e-b214-4434c210acf7-kube-api-access-d5hpc\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.676009 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzmvv\" (UniqueName: \"kubernetes.io/projected/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-kube-api-access-vzmvv\") pod \"heat-db-sync-8n574\" (UID: \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\") " pod="openstack/heat-db-sync-8n574" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.712622 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sd2kk"] Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.733022 4741 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sd2kk" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.735421 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wbg8p"] Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.750655 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.750822 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.750899 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zhhb5" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.795257 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5hpc\" (UniqueName: \"kubernetes.io/projected/4d1395bb-ffb5-492e-b214-4434c210acf7-kube-api-access-d5hpc\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.795413 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-config-data\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.795471 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6wn8\" (UniqueName: \"kubernetes.io/projected/03befef7-03ec-47b0-b178-46e527d8198e-kube-api-access-d6wn8\") pod \"neutron-db-sync-sd2kk\" (UID: \"03befef7-03ec-47b0-b178-46e527d8198e\") " pod="openstack/neutron-db-sync-sd2kk" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.795538 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-db-sync-config-data\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.795561 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03befef7-03ec-47b0-b178-46e527d8198e-config\") pod \"neutron-db-sync-sd2kk\" (UID: \"03befef7-03ec-47b0-b178-46e527d8198e\") " pod="openstack/neutron-db-sync-sd2kk" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.795724 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-combined-ca-bundle\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.795762 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d1395bb-ffb5-492e-b214-4434c210acf7-etc-machine-id\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.795824 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-scripts\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.795870 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/03befef7-03ec-47b0-b178-46e527d8198e-combined-ca-bundle\") pod \"neutron-db-sync-sd2kk\" (UID: \"03befef7-03ec-47b0-b178-46e527d8198e\") " pod="openstack/neutron-db-sync-sd2kk" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.797016 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d1395bb-ffb5-492e-b214-4434c210acf7-etc-machine-id\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.841319 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-db-sync-config-data\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.845463 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-8n574" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.854384 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sd2kk"] Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.858419 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-config-data\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.868851 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-scripts\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.877436 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5hpc\" (UniqueName: \"kubernetes.io/projected/4d1395bb-ffb5-492e-b214-4434c210acf7-kube-api-access-d5hpc\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.899065 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-combined-ca-bundle\") pod \"cinder-db-sync-wbg8p\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.902629 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03befef7-03ec-47b0-b178-46e527d8198e-combined-ca-bundle\") pod \"neutron-db-sync-sd2kk\" (UID: 
\"03befef7-03ec-47b0-b178-46e527d8198e\") " pod="openstack/neutron-db-sync-sd2kk" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.921702 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8rxn2"] Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.928586 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6wn8\" (UniqueName: \"kubernetes.io/projected/03befef7-03ec-47b0-b178-46e527d8198e-kube-api-access-d6wn8\") pod \"neutron-db-sync-sd2kk\" (UID: \"03befef7-03ec-47b0-b178-46e527d8198e\") " pod="openstack/neutron-db-sync-sd2kk" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.928731 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03befef7-03ec-47b0-b178-46e527d8198e-config\") pod \"neutron-db-sync-sd2kk\" (UID: \"03befef7-03ec-47b0-b178-46e527d8198e\") " pod="openstack/neutron-db-sync-sd2kk" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.943172 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03befef7-03ec-47b0-b178-46e527d8198e-combined-ca-bundle\") pod \"neutron-db-sync-sd2kk\" (UID: \"03befef7-03ec-47b0-b178-46e527d8198e\") " pod="openstack/neutron-db-sync-sd2kk" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.969841 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/03befef7-03ec-47b0-b178-46e527d8198e-config\") pod \"neutron-db-sync-sd2kk\" (UID: \"03befef7-03ec-47b0-b178-46e527d8198e\") " pod="openstack/neutron-db-sync-sd2kk" Feb 26 08:39:40 crc kubenswrapper[4741]: I0226 08:39:40.985315 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hfwv4"] Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.000838 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.021026 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.021244 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8tm7v" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.023044 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.041212 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6wn8\" (UniqueName: \"kubernetes.io/projected/03befef7-03ec-47b0-b178-46e527d8198e-kube-api-access-d6wn8\") pod \"neutron-db-sync-sd2kk\" (UID: \"03befef7-03ec-47b0-b178-46e527d8198e\") " pod="openstack/neutron-db-sync-sd2kk" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.047300 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-p65c6"] Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.072486 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.105859 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hfwv4"] Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.132462 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-p65c6"] Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.137777 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-scripts\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.138007 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-combined-ca-bundle\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.138046 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-config-data\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.138088 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx2qp\" (UniqueName: \"kubernetes.io/projected/16e2e3de-8ab8-4670-b4c5-6375011e04e7-kube-api-access-mx2qp\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc 
kubenswrapper[4741]: I0226 08:39:41.138281 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e2e3de-8ab8-4670-b4c5-6375011e04e7-logs\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.164244 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-8mgkv"] Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.178981 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.200514 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-8mgkv"] Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.200675 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8mgkv" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.253014 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e2e3de-8ab8-4670-b4c5-6375011e04e7-logs\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.253198 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-scripts\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.253241 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc8mv\" (UniqueName: 
\"kubernetes.io/projected/06589f08-740e-47d3-ae3b-a44edd8d0842-kube-api-access-lc8mv\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.253272 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.253345 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-config\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.253386 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.253436 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-combined-ca-bundle\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.253472 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-config-data\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.253502 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx2qp\" (UniqueName: \"kubernetes.io/projected/16e2e3de-8ab8-4670-b4c5-6375011e04e7-kube-api-access-mx2qp\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.253565 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.254131 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e2e3de-8ab8-4670-b4c5-6375011e04e7-logs\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.256186 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rrrgx" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.256529 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.257608 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sd2kk" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.294673 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-combined-ca-bundle\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.304341 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-scripts\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.315022 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx2qp\" (UniqueName: \"kubernetes.io/projected/16e2e3de-8ab8-4670-b4c5-6375011e04e7-kube-api-access-mx2qp\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.334445 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-config-data\") pod \"placement-db-sync-hfwv4\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") " pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.356272 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc8mv\" (UniqueName: \"kubernetes.io/projected/06589f08-740e-47d3-ae3b-a44edd8d0842-kube-api-access-lc8mv\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 
08:39:41.356355 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.356419 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-config\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.356446 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d79v9\" (UniqueName: \"kubernetes.io/projected/453e119a-80ff-4c19-b7d0-0860410fcc09-kube-api-access-d79v9\") pod \"barbican-db-sync-8mgkv\" (UID: \"453e119a-80ff-4c19-b7d0-0860410fcc09\") " pod="openstack/barbican-db-sync-8mgkv" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.356477 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.356524 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/453e119a-80ff-4c19-b7d0-0860410fcc09-db-sync-config-data\") pod \"barbican-db-sync-8mgkv\" (UID: \"453e119a-80ff-4c19-b7d0-0860410fcc09\") " pod="openstack/barbican-db-sync-8mgkv" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.356546 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/453e119a-80ff-4c19-b7d0-0860410fcc09-combined-ca-bundle\") pod \"barbican-db-sync-8mgkv\" (UID: \"453e119a-80ff-4c19-b7d0-0860410fcc09\") " pod="openstack/barbican-db-sync-8mgkv" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.356608 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.357710 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.358819 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.359422 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-config\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.361166 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:39:41 crc 
kubenswrapper[4741]: I0226 08:39:41.359956 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.372407 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.385819 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.386305 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.407881 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc8mv\" (UniqueName: \"kubernetes.io/projected/06589f08-740e-47d3-ae3b-a44edd8d0842-kube-api-access-lc8mv\") pod \"dnsmasq-dns-68dcc9cf6f-p65c6\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.414729 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.456375 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hfwv4" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.459323 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaee063-eb59-4c8e-b482-de4efc08084a-run-httpd\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.459370 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaee063-eb59-4c8e-b482-de4efc08084a-log-httpd\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.459424 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.459521 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.459578 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-config-data\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.459610 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hv7t\" (UniqueName: \"kubernetes.io/projected/bbaee063-eb59-4c8e-b482-de4efc08084a-kube-api-access-7hv7t\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.459655 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d79v9\" (UniqueName: \"kubernetes.io/projected/453e119a-80ff-4c19-b7d0-0860410fcc09-kube-api-access-d79v9\") pod \"barbican-db-sync-8mgkv\" (UID: \"453e119a-80ff-4c19-b7d0-0860410fcc09\") " pod="openstack/barbican-db-sync-8mgkv" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.459726 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/453e119a-80ff-4c19-b7d0-0860410fcc09-db-sync-config-data\") pod \"barbican-db-sync-8mgkv\" (UID: \"453e119a-80ff-4c19-b7d0-0860410fcc09\") " pod="openstack/barbican-db-sync-8mgkv" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.459756 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/453e119a-80ff-4c19-b7d0-0860410fcc09-combined-ca-bundle\") pod \"barbican-db-sync-8mgkv\" (UID: \"453e119a-80ff-4c19-b7d0-0860410fcc09\") " pod="openstack/barbican-db-sync-8mgkv" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.459779 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-scripts\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.472544 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/453e119a-80ff-4c19-b7d0-0860410fcc09-combined-ca-bundle\") pod \"barbican-db-sync-8mgkv\" (UID: \"453e119a-80ff-4c19-b7d0-0860410fcc09\") " pod="openstack/barbican-db-sync-8mgkv" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.482501 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/453e119a-80ff-4c19-b7d0-0860410fcc09-db-sync-config-data\") pod \"barbican-db-sync-8mgkv\" (UID: \"453e119a-80ff-4c19-b7d0-0860410fcc09\") " pod="openstack/barbican-db-sync-8mgkv" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.490512 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.521802 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d79v9\" (UniqueName: \"kubernetes.io/projected/453e119a-80ff-4c19-b7d0-0860410fcc09-kube-api-access-d79v9\") pod \"barbican-db-sync-8mgkv\" (UID: \"453e119a-80ff-4c19-b7d0-0860410fcc09\") " pod="openstack/barbican-db-sync-8mgkv" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.565894 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaee063-eb59-4c8e-b482-de4efc08084a-run-httpd\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.565952 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaee063-eb59-4c8e-b482-de4efc08084a-log-httpd\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.566001 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.566104 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.566220 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-config-data\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.566265 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hv7t\" (UniqueName: \"kubernetes.io/projected/bbaee063-eb59-4c8e-b482-de4efc08084a-kube-api-access-7hv7t\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.566363 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-scripts\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.566675 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaee063-eb59-4c8e-b482-de4efc08084a-log-httpd\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc 
kubenswrapper[4741]: I0226 08:39:41.567518 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaee063-eb59-4c8e-b482-de4efc08084a-run-httpd\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.577908 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.585385 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-scripts\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.586620 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-config-data\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.591474 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.609266 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hv7t\" (UniqueName: \"kubernetes.io/projected/bbaee063-eb59-4c8e-b482-de4efc08084a-kube-api-access-7hv7t\") pod \"ceilometer-0\" (UID: 
\"bbaee063-eb59-4c8e-b482-de4efc08084a\") " pod="openstack/ceilometer-0" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.706227 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8mgkv" Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.783095 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pffnd"] Feb 26 08:39:41 crc kubenswrapper[4741]: I0226 08:39:41.828823 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:39:42 crc kubenswrapper[4741]: I0226 08:39:42.184845 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8rxn2"] Feb 26 08:39:42 crc kubenswrapper[4741]: I0226 08:39:42.333175 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-8n574"] Feb 26 08:39:42 crc kubenswrapper[4741]: I0226 08:39:42.540781 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sd2kk"] Feb 26 08:39:42 crc kubenswrapper[4741]: I0226 08:39:42.581707 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hfwv4"] Feb 26 08:39:42 crc kubenswrapper[4741]: I0226 08:39:42.739339 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" event={"ID":"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe","Type":"ContainerStarted","Data":"1d8120e112550d5e94ae186fb28fcb8aafa615d83b8c5be7231abeeaaf639135"} Feb 26 08:39:42 crc kubenswrapper[4741]: I0226 08:39:42.752931 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hfwv4" event={"ID":"16e2e3de-8ab8-4670-b4c5-6375011e04e7","Type":"ContainerStarted","Data":"a841326b7f8c9ae960ff0d40d4e1ef18dcdef787e37f75f335c74a03b45a893d"} Feb 26 08:39:42 crc kubenswrapper[4741]: I0226 08:39:42.759917 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pffnd" 
event={"ID":"d47c90e6-d8ce-4f5a-91aa-34459da56120","Type":"ContainerStarted","Data":"7c39dc4f15522640f6cddeeac5ca3d717caa8692d2783fc3dce42c0dce996787"} Feb 26 08:39:42 crc kubenswrapper[4741]: I0226 08:39:42.764908 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sd2kk" event={"ID":"03befef7-03ec-47b0-b178-46e527d8198e","Type":"ContainerStarted","Data":"ced821b839b90d07b9d32119af5083a922733851f0329ce5bc6aacbfc6256dd1"} Feb 26 08:39:42 crc kubenswrapper[4741]: I0226 08:39:42.764974 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wbg8p"] Feb 26 08:39:42 crc kubenswrapper[4741]: I0226 08:39:42.792600 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-8n574" event={"ID":"61912c33-f4b2-4d1e-a2a0-df63c70ac97f","Type":"ContainerStarted","Data":"45f80b8094b718384643a69d98ef605ae9e49841d13f2b0b755705dcf2834026"} Feb 26 08:39:42 crc kubenswrapper[4741]: I0226 08:39:42.796393 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-8mgkv"] Feb 26 08:39:42 crc kubenswrapper[4741]: I0226 08:39:42.835214 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-p65c6"] Feb 26 08:39:42 crc kubenswrapper[4741]: W0226 08:39:42.860433 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06589f08_740e_47d3_ae3b_a44edd8d0842.slice/crio-5f4443f7c368098f7675694d3b80f8b8e302880450da0482931d21428746d424 WatchSource:0}: Error finding container 5f4443f7c368098f7675694d3b80f8b8e302880450da0482931d21428746d424: Status 404 returned error can't find the container with id 5f4443f7c368098f7675694d3b80f8b8e302880450da0482931d21428746d424 Feb 26 08:39:42 crc kubenswrapper[4741]: I0226 08:39:42.998974 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:39:43 crc kubenswrapper[4741]: I0226 08:39:43.839599 4741 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sd2kk" event={"ID":"03befef7-03ec-47b0-b178-46e527d8198e","Type":"ContainerStarted","Data":"6054d73673f66b7d855edaf87e94603cff02a30938e643fbfdca26b130d21776"} Feb 26 08:39:43 crc kubenswrapper[4741]: I0226 08:39:43.850789 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wbg8p" event={"ID":"4d1395bb-ffb5-492e-b214-4434c210acf7","Type":"ContainerStarted","Data":"d410268ff833a525cd6d70d31f012a74070dcca5901407ca05d61498769650e2"} Feb 26 08:39:43 crc kubenswrapper[4741]: I0226 08:39:43.856603 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" event={"ID":"06589f08-740e-47d3-ae3b-a44edd8d0842","Type":"ContainerStarted","Data":"6cf7fb982771922d687603c5fcb704dbc28bdc9e387250c6379283c5eeb3b663"} Feb 26 08:39:43 crc kubenswrapper[4741]: I0226 08:39:43.856704 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" event={"ID":"06589f08-740e-47d3-ae3b-a44edd8d0842","Type":"ContainerStarted","Data":"5f4443f7c368098f7675694d3b80f8b8e302880450da0482931d21428746d424"} Feb 26 08:39:43 crc kubenswrapper[4741]: I0226 08:39:43.862309 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaee063-eb59-4c8e-b482-de4efc08084a","Type":"ContainerStarted","Data":"1d35289f01351770de694e3aab376618edae86aeb7a2a8314511811262ed52c2"} Feb 26 08:39:43 crc kubenswrapper[4741]: I0226 08:39:43.867003 4741 generic.go:334] "Generic (PLEG): container finished" podID="8f7ed389-a009-4eaf-9186-5dd4ef16c9fe" containerID="6262119b1db9334507fc53abbe90dcad075d7d8b55c7b5fc5ed4e68af37d8c88" exitCode=0 Feb 26 08:39:43 crc kubenswrapper[4741]: I0226 08:39:43.866864 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" 
event={"ID":"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe","Type":"ContainerDied","Data":"6262119b1db9334507fc53abbe90dcad075d7d8b55c7b5fc5ed4e68af37d8c88"} Feb 26 08:39:43 crc kubenswrapper[4741]: I0226 08:39:43.875612 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8mgkv" event={"ID":"453e119a-80ff-4c19-b7d0-0860410fcc09","Type":"ContainerStarted","Data":"f4715375cac84d4b8445d494ceda2a169085cb35dc95c8ff165aa70f1e1c0e54"} Feb 26 08:39:43 crc kubenswrapper[4741]: I0226 08:39:43.886976 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-sd2kk" podStartSLOduration=3.886942366 podStartE2EDuration="3.886942366s" podCreationTimestamp="2026-02-26 08:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:39:43.874695389 +0000 UTC m=+1618.870632776" watchObservedRunningTime="2026-02-26 08:39:43.886942366 +0000 UTC m=+1618.882879753" Feb 26 08:39:43 crc kubenswrapper[4741]: I0226 08:39:43.890026 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pffnd" event={"ID":"d47c90e6-d8ce-4f5a-91aa-34459da56120","Type":"ContainerStarted","Data":"ebc8747182dca40700dcefb35fbf6229f718e2eff5b67a4ca0928b401692199d"} Feb 26 08:39:43 crc kubenswrapper[4741]: I0226 08:39:43.942838 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:39:44 crc kubenswrapper[4741]: I0226 08:39:44.001097 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pffnd" podStartSLOduration=5.001067263 podStartE2EDuration="5.001067263s" podCreationTimestamp="2026-02-26 08:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:39:43.975082996 +0000 UTC m=+1618.971020383" watchObservedRunningTime="2026-02-26 
08:39:44.001067263 +0000 UTC m=+1618.997004650" Feb 26 08:39:44 crc kubenswrapper[4741]: I0226 08:39:44.005325 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:39:44 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:39:44 crc kubenswrapper[4741]: > Feb 26 08:39:44 crc kubenswrapper[4741]: I0226 08:39:44.005688 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:39:44 crc kubenswrapper[4741]: I0226 08:39:44.006936 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8"} pod="openshift-marketplace/redhat-operators-7mjbs" containerMessage="Container registry-server failed startup probe, will be restarted" Feb 26 08:39:44 crc kubenswrapper[4741]: I0226 08:39:44.007073 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" containerID="cri-o://c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8" gracePeriod=30 Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.067976 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.069733 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" event={"ID":"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe","Type":"ContainerDied","Data":"1d8120e112550d5e94ae186fb28fcb8aafa615d83b8c5be7231abeeaaf639135"} Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.069789 4741 scope.go:117] "RemoveContainer" containerID="6262119b1db9334507fc53abbe90dcad075d7d8b55c7b5fc5ed4e68af37d8c88" Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.074917 4741 generic.go:334] "Generic (PLEG): container finished" podID="06589f08-740e-47d3-ae3b-a44edd8d0842" containerID="6cf7fb982771922d687603c5fcb704dbc28bdc9e387250c6379283c5eeb3b663" exitCode=0 Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.075128 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" event={"ID":"06589f08-740e-47d3-ae3b-a44edd8d0842","Type":"ContainerDied","Data":"6cf7fb982771922d687603c5fcb704dbc28bdc9e387250c6379283c5eeb3b663"} Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.418330 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-ovsdbserver-nb\") pod \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.418408 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h9ml\" (UniqueName: \"kubernetes.io/projected/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-kube-api-access-5h9ml\") pod \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.418535 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-ovsdbserver-sb\") pod \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.418700 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-config\") pod \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.418933 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-dns-svc\") pod \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\" (UID: \"8f7ed389-a009-4eaf-9186-5dd4ef16c9fe\") " Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.429630 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-kube-api-access-5h9ml" (OuterVolumeSpecName: "kube-api-access-5h9ml") pod "8f7ed389-a009-4eaf-9186-5dd4ef16c9fe" (UID: "8f7ed389-a009-4eaf-9186-5dd4ef16c9fe"). InnerVolumeSpecName "kube-api-access-5h9ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.482414 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8f7ed389-a009-4eaf-9186-5dd4ef16c9fe" (UID: "8f7ed389-a009-4eaf-9186-5dd4ef16c9fe"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.503289 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8f7ed389-a009-4eaf-9186-5dd4ef16c9fe" (UID: "8f7ed389-a009-4eaf-9186-5dd4ef16c9fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.519418 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8f7ed389-a009-4eaf-9186-5dd4ef16c9fe" (UID: "8f7ed389-a009-4eaf-9186-5dd4ef16c9fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.521684 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-config" (OuterVolumeSpecName: "config") pod "8f7ed389-a009-4eaf-9186-5dd4ef16c9fe" (UID: "8f7ed389-a009-4eaf-9186-5dd4ef16c9fe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.528788 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.528843 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.528861 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h9ml\" (UniqueName: \"kubernetes.io/projected/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-kube-api-access-5h9ml\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.528879 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:45 crc kubenswrapper[4741]: I0226 08:39:45.528891 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:39:46 crc kubenswrapper[4741]: I0226 08:39:46.111207 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" event={"ID":"06589f08-740e-47d3-ae3b-a44edd8d0842","Type":"ContainerStarted","Data":"70fded0bbc9ca37fa5937d559e33a82b74f43dd052ebc2c3988eb3ce8649c2da"} Feb 26 08:39:46 crc kubenswrapper[4741]: I0226 08:39:46.116443 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:46 crc kubenswrapper[4741]: I0226 08:39:46.115860 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-8rxn2" Feb 26 08:39:46 crc kubenswrapper[4741]: I0226 08:39:46.169345 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" podStartSLOduration=6.169309731 podStartE2EDuration="6.169309731s" podCreationTimestamp="2026-02-26 08:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:39:46.158741882 +0000 UTC m=+1621.154679289" watchObservedRunningTime="2026-02-26 08:39:46.169309731 +0000 UTC m=+1621.165247118" Feb 26 08:39:46 crc kubenswrapper[4741]: I0226 08:39:46.240531 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8rxn2"] Feb 26 08:39:46 crc kubenswrapper[4741]: I0226 08:39:46.256364 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8rxn2"] Feb 26 08:39:47 crc kubenswrapper[4741]: I0226 08:39:47.809457 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f7ed389-a009-4eaf-9186-5dd4ef16c9fe" path="/var/lib/kubelet/pods/8f7ed389-a009-4eaf-9186-5dd4ef16c9fe/volumes" Feb 26 08:39:51 crc kubenswrapper[4741]: I0226 08:39:51.493002 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:39:51 crc kubenswrapper[4741]: I0226 08:39:51.582389 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tqsq4"] Feb 26 08:39:51 crc kubenswrapper[4741]: I0226 08:39:51.583185 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-tqsq4" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="dnsmasq-dns" containerID="cri-o://289bc94fd6a05120ecf2e29cabec867f1951a2472cfecdab06eb528d0ab5242c" gracePeriod=10 Feb 26 08:39:52 crc kubenswrapper[4741]: I0226 08:39:52.224289 4741 
generic.go:334] "Generic (PLEG): container finished" podID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerID="289bc94fd6a05120ecf2e29cabec867f1951a2472cfecdab06eb528d0ab5242c" exitCode=0 Feb 26 08:39:52 crc kubenswrapper[4741]: I0226 08:39:52.224381 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tqsq4" event={"ID":"bd73291d-c1d5-4595-89e0-f756eca4ee23","Type":"ContainerDied","Data":"289bc94fd6a05120ecf2e29cabec867f1951a2472cfecdab06eb528d0ab5242c"} Feb 26 08:39:52 crc kubenswrapper[4741]: I0226 08:39:52.596795 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tqsq4" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: connect: connection refused" Feb 26 08:39:55 crc kubenswrapper[4741]: I0226 08:39:55.153633 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:39:55 crc kubenswrapper[4741]: I0226 08:39:55.154180 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:39:55 crc kubenswrapper[4741]: I0226 08:39:55.269712 4741 generic.go:334] "Generic (PLEG): container finished" podID="d47c90e6-d8ce-4f5a-91aa-34459da56120" containerID="ebc8747182dca40700dcefb35fbf6229f718e2eff5b67a4ca0928b401692199d" exitCode=0 Feb 26 08:39:55 crc kubenswrapper[4741]: I0226 08:39:55.269769 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pffnd" 
event={"ID":"d47c90e6-d8ce-4f5a-91aa-34459da56120","Type":"ContainerDied","Data":"ebc8747182dca40700dcefb35fbf6229f718e2eff5b67a4ca0928b401692199d"} Feb 26 08:39:56 crc kubenswrapper[4741]: E0226 08:39:56.488453 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="91b0231b-fbdf-4714-ac14-d3621c8c7807" Feb 26 08:39:57 crc kubenswrapper[4741]: I0226 08:39:57.297814 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 26 08:39:57 crc kubenswrapper[4741]: I0226 08:39:57.596330 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tqsq4" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: connect: connection refused" Feb 26 08:40:00 crc kubenswrapper[4741]: I0226 08:40:00.144159 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534920-nwmvg"] Feb 26 08:40:00 crc kubenswrapper[4741]: E0226 08:40:00.145811 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f7ed389-a009-4eaf-9186-5dd4ef16c9fe" containerName="init" Feb 26 08:40:00 crc kubenswrapper[4741]: I0226 08:40:00.145829 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7ed389-a009-4eaf-9186-5dd4ef16c9fe" containerName="init" Feb 26 08:40:00 crc kubenswrapper[4741]: I0226 08:40:00.146067 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f7ed389-a009-4eaf-9186-5dd4ef16c9fe" containerName="init" Feb 26 08:40:00 crc kubenswrapper[4741]: I0226 08:40:00.149655 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534920-nwmvg" Feb 26 08:40:00 crc kubenswrapper[4741]: I0226 08:40:00.152297 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:40:00 crc kubenswrapper[4741]: I0226 08:40:00.152419 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:40:00 crc kubenswrapper[4741]: I0226 08:40:00.152570 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:40:00 crc kubenswrapper[4741]: I0226 08:40:00.159252 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534920-nwmvg"] Feb 26 08:40:00 crc kubenswrapper[4741]: I0226 08:40:00.223936 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npspj\" (UniqueName: \"kubernetes.io/projected/bc7465b7-f180-4fe8-9c29-3e75da8c867c-kube-api-access-npspj\") pod \"auto-csr-approver-29534920-nwmvg\" (UID: \"bc7465b7-f180-4fe8-9c29-3e75da8c867c\") " pod="openshift-infra/auto-csr-approver-29534920-nwmvg" Feb 26 08:40:00 crc kubenswrapper[4741]: I0226 08:40:00.326513 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npspj\" (UniqueName: \"kubernetes.io/projected/bc7465b7-f180-4fe8-9c29-3e75da8c867c-kube-api-access-npspj\") pod \"auto-csr-approver-29534920-nwmvg\" (UID: \"bc7465b7-f180-4fe8-9c29-3e75da8c867c\") " pod="openshift-infra/auto-csr-approver-29534920-nwmvg" Feb 26 08:40:00 crc kubenswrapper[4741]: I0226 08:40:00.355702 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npspj\" (UniqueName: \"kubernetes.io/projected/bc7465b7-f180-4fe8-9c29-3e75da8c867c-kube-api-access-npspj\") pod \"auto-csr-approver-29534920-nwmvg\" (UID: \"bc7465b7-f180-4fe8-9c29-3e75da8c867c\") " 
pod="openshift-infra/auto-csr-approver-29534920-nwmvg" Feb 26 08:40:00 crc kubenswrapper[4741]: I0226 08:40:00.472488 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534920-nwmvg" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.118624 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cf7rh"] Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.123249 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.139415 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cf7rh"] Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.262617 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959240c9-4eb8-4236-9927-758997ebf0a0-catalog-content\") pod \"certified-operators-cf7rh\" (UID: \"959240c9-4eb8-4236-9927-758997ebf0a0\") " pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.262736 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959240c9-4eb8-4236-9927-758997ebf0a0-utilities\") pod \"certified-operators-cf7rh\" (UID: \"959240c9-4eb8-4236-9927-758997ebf0a0\") " pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.263001 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gswb\" (UniqueName: \"kubernetes.io/projected/959240c9-4eb8-4236-9927-758997ebf0a0-kube-api-access-8gswb\") pod \"certified-operators-cf7rh\" (UID: \"959240c9-4eb8-4236-9927-758997ebf0a0\") " 
pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.356573 4741 generic.go:334] "Generic (PLEG): container finished" podID="2325772f-698b-4597-8918-0f46f598545e" containerID="ef1e45fe0d040c52aaf7160a0307b78d1f5dbd00d5b48914ae3125bcb0e7ab5d" exitCode=0 Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.356668 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wz7kr" event={"ID":"2325772f-698b-4597-8918-0f46f598545e","Type":"ContainerDied","Data":"ef1e45fe0d040c52aaf7160a0307b78d1f5dbd00d5b48914ae3125bcb0e7ab5d"} Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.366306 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959240c9-4eb8-4236-9927-758997ebf0a0-catalog-content\") pod \"certified-operators-cf7rh\" (UID: \"959240c9-4eb8-4236-9927-758997ebf0a0\") " pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.366391 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959240c9-4eb8-4236-9927-758997ebf0a0-utilities\") pod \"certified-operators-cf7rh\" (UID: \"959240c9-4eb8-4236-9927-758997ebf0a0\") " pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.366565 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gswb\" (UniqueName: \"kubernetes.io/projected/959240c9-4eb8-4236-9927-758997ebf0a0-kube-api-access-8gswb\") pod \"certified-operators-cf7rh\" (UID: \"959240c9-4eb8-4236-9927-758997ebf0a0\") " pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.366780 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/959240c9-4eb8-4236-9927-758997ebf0a0-catalog-content\") pod \"certified-operators-cf7rh\" (UID: \"959240c9-4eb8-4236-9927-758997ebf0a0\") " pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.366889 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959240c9-4eb8-4236-9927-758997ebf0a0-utilities\") pod \"certified-operators-cf7rh\" (UID: \"959240c9-4eb8-4236-9927-758997ebf0a0\") " pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.401293 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gswb\" (UniqueName: \"kubernetes.io/projected/959240c9-4eb8-4236-9927-758997ebf0a0-kube-api-access-8gswb\") pod \"certified-operators-cf7rh\" (UID: \"959240c9-4eb8-4236-9927-758997ebf0a0\") " pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.467681 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.468193 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.484131 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91b0231b-fbdf-4714-ac14-d3621c8c7807-etc-swift\") pod \"swift-storage-0\" (UID: \"91b0231b-fbdf-4714-ac14-d3621c8c7807\") " pod="openstack/swift-storage-0" Feb 26 08:40:01 crc kubenswrapper[4741]: I0226 08:40:01.501530 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 26 08:40:02 crc kubenswrapper[4741]: I0226 08:40:02.373019 4741 generic.go:334] "Generic (PLEG): container finished" podID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerID="c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8" exitCode=0 Feb 26 08:40:02 crc kubenswrapper[4741]: I0226 08:40:02.373196 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mjbs" event={"ID":"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba","Type":"ContainerDied","Data":"c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8"} Feb 26 08:40:07 crc kubenswrapper[4741]: I0226 08:40:07.597024 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tqsq4" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Feb 26 08:40:07 crc kubenswrapper[4741]: I0226 08:40:07.598158 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:40:12 crc kubenswrapper[4741]: I0226 08:40:12.653834 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tqsq4" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Feb 26 08:40:16 crc kubenswrapper[4741]: E0226 08:40:16.136014 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 26 08:40:16 crc kubenswrapper[4741]: E0226 08:40:16.137068 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n58bhf9h5b6h64fhbch66ch56fh5ch699h69h689h5cfh5c7h6dhb5hfh89h5fh557h78h8ch565h569hfch589h696h65bhdbh668hch5f8h576q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr
:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hv7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(bbaee063-eb59-4c8e-b482-de4efc08084a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.138039 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.336873 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-fernet-keys\") pod \"d47c90e6-d8ce-4f5a-91aa-34459da56120\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.336986 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-scripts\") pod \"d47c90e6-d8ce-4f5a-91aa-34459da56120\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.337139 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-combined-ca-bundle\") pod \"d47c90e6-d8ce-4f5a-91aa-34459da56120\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.337204 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-credential-keys\") pod \"d47c90e6-d8ce-4f5a-91aa-34459da56120\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.337338 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc6fp\" (UniqueName: \"kubernetes.io/projected/d47c90e6-d8ce-4f5a-91aa-34459da56120-kube-api-access-cc6fp\") pod \"d47c90e6-d8ce-4f5a-91aa-34459da56120\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.337374 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-config-data\") pod \"d47c90e6-d8ce-4f5a-91aa-34459da56120\" (UID: \"d47c90e6-d8ce-4f5a-91aa-34459da56120\") " Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.349991 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d47c90e6-d8ce-4f5a-91aa-34459da56120" (UID: "d47c90e6-d8ce-4f5a-91aa-34459da56120"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.350184 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-scripts" (OuterVolumeSpecName: "scripts") pod "d47c90e6-d8ce-4f5a-91aa-34459da56120" (UID: "d47c90e6-d8ce-4f5a-91aa-34459da56120"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.350200 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d47c90e6-d8ce-4f5a-91aa-34459da56120" (UID: "d47c90e6-d8ce-4f5a-91aa-34459da56120"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.353240 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d47c90e6-d8ce-4f5a-91aa-34459da56120-kube-api-access-cc6fp" (OuterVolumeSpecName: "kube-api-access-cc6fp") pod "d47c90e6-d8ce-4f5a-91aa-34459da56120" (UID: "d47c90e6-d8ce-4f5a-91aa-34459da56120"). InnerVolumeSpecName "kube-api-access-cc6fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.375172 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d47c90e6-d8ce-4f5a-91aa-34459da56120" (UID: "d47c90e6-d8ce-4f5a-91aa-34459da56120"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.380665 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-config-data" (OuterVolumeSpecName: "config-data") pod "d47c90e6-d8ce-4f5a-91aa-34459da56120" (UID: "d47c90e6-d8ce-4f5a-91aa-34459da56120"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.440199 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.440719 4741 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.440737 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc6fp\" (UniqueName: \"kubernetes.io/projected/d47c90e6-d8ce-4f5a-91aa-34459da56120-kube-api-access-cc6fp\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.440751 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-config-data\") on node \"crc\" 
DevicePath \"\"" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.440764 4741 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.440774 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d47c90e6-d8ce-4f5a-91aa-34459da56120-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.562922 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pffnd" event={"ID":"d47c90e6-d8ce-4f5a-91aa-34459da56120","Type":"ContainerDied","Data":"7c39dc4f15522640f6cddeeac5ca3d717caa8692d2783fc3dce42c0dce996787"} Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.562977 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c39dc4f15522640f6cddeeac5ca3d717caa8692d2783fc3dce42c0dce996787" Feb 26 08:40:16 crc kubenswrapper[4741]: I0226 08:40:16.563020 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pffnd" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.249840 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pffnd"] Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.265542 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pffnd"] Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.360729 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4w2q8"] Feb 26 08:40:17 crc kubenswrapper[4741]: E0226 08:40:17.361623 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d47c90e6-d8ce-4f5a-91aa-34459da56120" containerName="keystone-bootstrap" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.361650 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d47c90e6-d8ce-4f5a-91aa-34459da56120" containerName="keystone-bootstrap" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.362393 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d47c90e6-d8ce-4f5a-91aa-34459da56120" containerName="keystone-bootstrap" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.364047 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.367408 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.367408 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.367583 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.370484 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4srwh" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.370838 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.381955 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4w2q8"] Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.467773 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-scripts\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.467847 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-config-data\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.467919 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-credential-keys\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.468061 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2lkl\" (UniqueName: \"kubernetes.io/projected/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-kube-api-access-l2lkl\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.468225 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-fernet-keys\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.468328 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-combined-ca-bundle\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.586863 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-scripts\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.586950 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-config-data\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.587380 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-credential-keys\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.587497 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2lkl\" (UniqueName: \"kubernetes.io/projected/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-kube-api-access-l2lkl\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.587565 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-fernet-keys\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.587626 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-combined-ca-bundle\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.602539 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-scripts\") pod \"keystone-bootstrap-4w2q8\" 
(UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.604083 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-fernet-keys\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.626996 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-config-data\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.627017 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-credential-keys\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.627187 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-combined-ca-bundle\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.655458 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tqsq4" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.749086 4741 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l2lkl\" (UniqueName: \"kubernetes.io/projected/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-kube-api-access-l2lkl\") pod \"keystone-bootstrap-4w2q8\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.806237 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d47c90e6-d8ce-4f5a-91aa-34459da56120" path="/var/lib/kubelet/pods/d47c90e6-d8ce-4f5a-91aa-34459da56120/volumes" Feb 26 08:40:17 crc kubenswrapper[4741]: I0226 08:40:17.993467 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:22 crc kubenswrapper[4741]: I0226 08:40:22.656299 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tqsq4" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Feb 26 08:40:22 crc kubenswrapper[4741]: I0226 08:40:22.735973 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ldrfn"] Feb 26 08:40:22 crc kubenswrapper[4741]: I0226 08:40:22.740367 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:40:22 crc kubenswrapper[4741]: I0226 08:40:22.768339 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ldrfn"] Feb 26 08:40:22 crc kubenswrapper[4741]: I0226 08:40:22.848941 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-utilities\") pod \"redhat-marketplace-ldrfn\" (UID: \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\") " pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:40:22 crc kubenswrapper[4741]: I0226 08:40:22.849014 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njxn4\" (UniqueName: \"kubernetes.io/projected/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-kube-api-access-njxn4\") pod \"redhat-marketplace-ldrfn\" (UID: \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\") " pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:40:22 crc kubenswrapper[4741]: I0226 08:40:22.849973 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-catalog-content\") pod \"redhat-marketplace-ldrfn\" (UID: \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\") " pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:40:22 crc kubenswrapper[4741]: I0226 08:40:22.952947 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-catalog-content\") pod \"redhat-marketplace-ldrfn\" (UID: \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\") " pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:40:22 crc kubenswrapper[4741]: I0226 08:40:22.953154 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-utilities\") pod \"redhat-marketplace-ldrfn\" (UID: \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\") " pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:40:22 crc kubenswrapper[4741]: I0226 08:40:22.953193 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njxn4\" (UniqueName: \"kubernetes.io/projected/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-kube-api-access-njxn4\") pod \"redhat-marketplace-ldrfn\" (UID: \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\") " pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:40:23 crc kubenswrapper[4741]: I0226 08:40:23.013384 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-catalog-content\") pod \"redhat-marketplace-ldrfn\" (UID: \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\") " pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:40:23 crc kubenswrapper[4741]: I0226 08:40:23.020520 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njxn4\" (UniqueName: \"kubernetes.io/projected/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-kube-api-access-njxn4\") pod \"redhat-marketplace-ldrfn\" (UID: \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\") " pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:40:23 crc kubenswrapper[4741]: I0226 08:40:23.023554 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-utilities\") pod \"redhat-marketplace-ldrfn\" (UID: \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\") " pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:40:23 crc kubenswrapper[4741]: I0226 08:40:23.076411 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:40:25 crc kubenswrapper[4741]: I0226 08:40:25.149360 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:40:25 crc kubenswrapper[4741]: I0226 08:40:25.151678 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:40:27 crc kubenswrapper[4741]: I0226 08:40:27.657619 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tqsq4" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.345030 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wz7kr" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.362587 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.527881 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcnt5\" (UniqueName: \"kubernetes.io/projected/2325772f-698b-4597-8918-0f46f598545e-kube-api-access-fcnt5\") pod \"2325772f-698b-4597-8918-0f46f598545e\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.528339 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-ovsdbserver-nb\") pod \"bd73291d-c1d5-4595-89e0-f756eca4ee23\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.528365 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-combined-ca-bundle\") pod \"2325772f-698b-4597-8918-0f46f598545e\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.528481 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-dns-svc\") pod \"bd73291d-c1d5-4595-89e0-f756eca4ee23\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.528535 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-config-data\") pod \"2325772f-698b-4597-8918-0f46f598545e\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.528665 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-ovsdbserver-sb\") pod \"bd73291d-c1d5-4595-89e0-f756eca4ee23\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.528706 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-db-sync-config-data\") pod \"2325772f-698b-4597-8918-0f46f598545e\" (UID: \"2325772f-698b-4597-8918-0f46f598545e\") " Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.528784 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5zbw\" (UniqueName: \"kubernetes.io/projected/bd73291d-c1d5-4595-89e0-f756eca4ee23-kube-api-access-s5zbw\") pod \"bd73291d-c1d5-4595-89e0-f756eca4ee23\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.528819 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-config\") pod \"bd73291d-c1d5-4595-89e0-f756eca4ee23\" (UID: \"bd73291d-c1d5-4595-89e0-f756eca4ee23\") " Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.534482 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325772f-698b-4597-8918-0f46f598545e-kube-api-access-fcnt5" (OuterVolumeSpecName: "kube-api-access-fcnt5") pod "2325772f-698b-4597-8918-0f46f598545e" (UID: "2325772f-698b-4597-8918-0f46f598545e"). InnerVolumeSpecName "kube-api-access-fcnt5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.540458 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2325772f-698b-4597-8918-0f46f598545e" (UID: "2325772f-698b-4597-8918-0f46f598545e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.541882 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd73291d-c1d5-4595-89e0-f756eca4ee23-kube-api-access-s5zbw" (OuterVolumeSpecName: "kube-api-access-s5zbw") pod "bd73291d-c1d5-4595-89e0-f756eca4ee23" (UID: "bd73291d-c1d5-4595-89e0-f756eca4ee23"). InnerVolumeSpecName "kube-api-access-s5zbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.565387 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2325772f-698b-4597-8918-0f46f598545e" (UID: "2325772f-698b-4597-8918-0f46f598545e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.593043 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd73291d-c1d5-4595-89e0-f756eca4ee23" (UID: "bd73291d-c1d5-4595-89e0-f756eca4ee23"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.595797 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd73291d-c1d5-4595-89e0-f756eca4ee23" (UID: "bd73291d-c1d5-4595-89e0-f756eca4ee23"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.598860 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-config" (OuterVolumeSpecName: "config") pod "bd73291d-c1d5-4595-89e0-f756eca4ee23" (UID: "bd73291d-c1d5-4595-89e0-f756eca4ee23"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.623482 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd73291d-c1d5-4595-89e0-f756eca4ee23" (UID: "bd73291d-c1d5-4595-89e0-f756eca4ee23"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.631918 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.631956 4741 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.631972 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5zbw\" (UniqueName: \"kubernetes.io/projected/bd73291d-c1d5-4595-89e0-f756eca4ee23-kube-api-access-s5zbw\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.631989 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.632000 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcnt5\" (UniqueName: \"kubernetes.io/projected/2325772f-698b-4597-8918-0f46f598545e-kube-api-access-fcnt5\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.632011 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.632021 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:32 crc kubenswrapper[4741]: 
I0226 08:40:32.632034 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd73291d-c1d5-4595-89e0-f756eca4ee23-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.632533 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-config-data" (OuterVolumeSpecName: "config-data") pod "2325772f-698b-4597-8918-0f46f598545e" (UID: "2325772f-698b-4597-8918-0f46f598545e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.659142 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tqsq4" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.736790 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2325772f-698b-4597-8918-0f46f598545e-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.779477 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wz7kr" event={"ID":"2325772f-698b-4597-8918-0f46f598545e","Type":"ContainerDied","Data":"8e8ac8f4e5b05b7c6e9145c660c1005baedbffe10147ae4f25ddb35c74939b14"} Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.779570 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e8ac8f4e5b05b7c6e9145c660c1005baedbffe10147ae4f25ddb35c74939b14" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.779516 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wz7kr" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.785022 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tqsq4" event={"ID":"bd73291d-c1d5-4595-89e0-f756eca4ee23","Type":"ContainerDied","Data":"7452453b63ef4175fe7c0b7aa2980554b58c2a7490ad76bd17aa737528ae68b4"} Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.785078 4741 scope.go:117] "RemoveContainer" containerID="289bc94fd6a05120ecf2e29cabec867f1951a2472cfecdab06eb528d0ab5242c" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.785191 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tqsq4" Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.852031 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tqsq4"] Feb 26 08:40:32 crc kubenswrapper[4741]: I0226 08:40:32.864831 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tqsq4"] Feb 26 08:40:33 crc kubenswrapper[4741]: I0226 08:40:33.802861 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" path="/var/lib/kubelet/pods/bd73291d-c1d5-4595-89e0-f756eca4ee23/volumes" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.514941 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-ddszv"] Feb 26 08:40:34 crc kubenswrapper[4741]: E0226 08:40:34.516167 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2325772f-698b-4597-8918-0f46f598545e" containerName="glance-db-sync" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.516184 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="2325772f-698b-4597-8918-0f46f598545e" containerName="glance-db-sync" Feb 26 08:40:34 crc kubenswrapper[4741]: E0226 08:40:34.516212 4741 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="dnsmasq-dns" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.516218 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="dnsmasq-dns" Feb 26 08:40:34 crc kubenswrapper[4741]: E0226 08:40:34.516230 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="init" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.516237 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="init" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.516468 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd73291d-c1d5-4595-89e0-f756eca4ee23" containerName="dnsmasq-dns" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.516480 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="2325772f-698b-4597-8918-0f46f598545e" containerName="glance-db-sync" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.517978 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.565950 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-ddszv"] Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.685381 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mldqn\" (UniqueName: \"kubernetes.io/projected/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-kube-api-access-mldqn\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.685543 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-config\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.685627 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.685655 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.685696 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-dns-svc\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.788428 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mldqn\" (UniqueName: \"kubernetes.io/projected/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-kube-api-access-mldqn\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.788576 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-config\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.788644 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.788669 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.788705 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-dns-svc\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.789726 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-config\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.789745 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.789776 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-dns-svc\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.789745 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.820301 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mldqn\" (UniqueName: \"kubernetes.io/projected/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-kube-api-access-mldqn\") pod \"dnsmasq-dns-f84976bdf-ddszv\" (UID: 
\"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:34 crc kubenswrapper[4741]: I0226 08:40:34.867808 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.228948 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.232424 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.240007 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.240315 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7wdvc" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.241895 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.255124 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.410525 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.410592 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.410646 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-scripts\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.410674 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd9gx\" (UniqueName: \"kubernetes.io/projected/99620102-91cd-41b0-a17d-3fc318bef67e-kube-api-access-fd9gx\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.410997 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/99620102-91cd-41b0-a17d-3fc318bef67e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.411303 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99620102-91cd-41b0-a17d-3fc318bef67e-logs\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.411709 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-config-data\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.514297 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99620102-91cd-41b0-a17d-3fc318bef67e-logs\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.514414 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-config-data\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.514470 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.514499 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.514541 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-scripts\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.514569 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd9gx\" (UniqueName: \"kubernetes.io/projected/99620102-91cd-41b0-a17d-3fc318bef67e-kube-api-access-fd9gx\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.514636 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/99620102-91cd-41b0-a17d-3fc318bef67e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.514978 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99620102-91cd-41b0-a17d-3fc318bef67e-logs\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.515208 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/99620102-91cd-41b0-a17d-3fc318bef67e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.520977 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.521029 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/66385550d4571b43986ab0832fdd5f11a5f2b8cdc4d8f3b6edc74982f484140d/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.521987 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-scripts\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.522661 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-config-data\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.542161 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.546854 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd9gx\" (UniqueName: \"kubernetes.io/projected/99620102-91cd-41b0-a17d-3fc318bef67e-kube-api-access-fd9gx\") pod 
\"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.617339 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod \"glance-default-external-api-0\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.685705 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.688235 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.696564 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.722837 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.841781 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3fa53b25-50bc-43dc-ba54-dcda79b2c911-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.841951 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") 
" pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.842008 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.842211 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7td9\" (UniqueName: \"kubernetes.io/projected/3fa53b25-50bc-43dc-ba54-dcda79b2c911-kube-api-access-z7td9\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.842256 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fa53b25-50bc-43dc-ba54-dcda79b2c911-logs\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.842294 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.842486 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.885814 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.946234 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7td9\" (UniqueName: \"kubernetes.io/projected/3fa53b25-50bc-43dc-ba54-dcda79b2c911-kube-api-access-z7td9\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.946330 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fa53b25-50bc-43dc-ba54-dcda79b2c911-logs\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.946393 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.948755 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fa53b25-50bc-43dc-ba54-dcda79b2c911-logs\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.949295 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.949411 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3fa53b25-50bc-43dc-ba54-dcda79b2c911-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.949678 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.949775 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.950169 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3fa53b25-50bc-43dc-ba54-dcda79b2c911-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.968434 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.979206 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.981576 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.981605 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7td9\" (UniqueName: \"kubernetes.io/projected/3fa53b25-50bc-43dc-ba54-dcda79b2c911-kube-api-access-z7td9\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.983259 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/417e49831a1cdf1c972733aab859fe5a1181b30877fdb96884b853b109c5ec95/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 26 08:40:35 crc kubenswrapper[4741]: I0226 08:40:35.985652 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:36 crc kubenswrapper[4741]: I0226 08:40:36.039951 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"glance-default-internal-api-0\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:36 crc kubenswrapper[4741]: I0226 08:40:36.316655 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 08:40:37 crc kubenswrapper[4741]: I0226 08:40:37.377940 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 08:40:37 crc kubenswrapper[4741]: I0226 08:40:37.499463 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 08:40:37 crc kubenswrapper[4741]: E0226 08:40:37.987101 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 26 08:40:37 crc kubenswrapper[4741]: E0226 08:40:37.987345 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d79v9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-8mgkv_openstack(453e119a-80ff-4c19-b7d0-0860410fcc09): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:40:37 crc kubenswrapper[4741]: E0226 08:40:37.988857 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-8mgkv" 
podUID="453e119a-80ff-4c19-b7d0-0860410fcc09" Feb 26 08:40:38 crc kubenswrapper[4741]: E0226 08:40:38.036518 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 26 08:40:38 crc kubenswrapper[4741]: E0226 08:40:38.036736 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzmvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],
Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-8n574_openstack(61912c33-f4b2-4d1e-a2a0-df63c70ac97f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:40:38 crc kubenswrapper[4741]: E0226 08:40:38.038490 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-8n574" podUID="61912c33-f4b2-4d1e-a2a0-df63c70ac97f" Feb 26 08:40:38 crc kubenswrapper[4741]: I0226 08:40:38.538525 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534920-nwmvg"] Feb 26 08:40:38 crc kubenswrapper[4741]: E0226 08:40:38.866344 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-8mgkv" podUID="453e119a-80ff-4c19-b7d0-0860410fcc09" Feb 26 08:40:38 crc kubenswrapper[4741]: E0226 08:40:38.866401 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-8n574" podUID="61912c33-f4b2-4d1e-a2a0-df63c70ac97f" Feb 26 08:40:39 crc kubenswrapper[4741]: E0226 
08:40:39.665261 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 26 08:40:39 crc kubenswrapper[4741]: E0226 08:40:39.665482 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5hpc,
ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-wbg8p_openstack(4d1395bb-ffb5-492e-b214-4434c210acf7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 08:40:39 crc kubenswrapper[4741]: E0226 08:40:39.667501 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-wbg8p" podUID="4d1395bb-ffb5-492e-b214-4434c210acf7" Feb 26 08:40:39 crc kubenswrapper[4741]: W0226 08:40:39.682841 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc7465b7_f180_4fe8_9c29_3e75da8c867c.slice/crio-d211f238d1249c70e1ece52564085f1b8f2634e2d4aee683fd533f4340975179 WatchSource:0}: Error finding container d211f238d1249c70e1ece52564085f1b8f2634e2d4aee683fd533f4340975179: Status 404 returned error can't find the container with id d211f238d1249c70e1ece52564085f1b8f2634e2d4aee683fd533f4340975179 Feb 26 08:40:39 crc kubenswrapper[4741]: I0226 08:40:39.911607 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534920-nwmvg" 
event={"ID":"bc7465b7-f180-4fe8-9c29-3e75da8c867c","Type":"ContainerStarted","Data":"d211f238d1249c70e1ece52564085f1b8f2634e2d4aee683fd533f4340975179"} Feb 26 08:40:39 crc kubenswrapper[4741]: E0226 08:40:39.936747 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-wbg8p" podUID="4d1395bb-ffb5-492e-b214-4434c210acf7" Feb 26 08:40:40 crc kubenswrapper[4741]: I0226 08:40:40.203429 4741 scope.go:117] "RemoveContainer" containerID="8cebb6ecdcb821c3964974b0456d405b8619f9e6a8841ed2eb3f743c5fa01c99" Feb 26 08:40:40 crc kubenswrapper[4741]: I0226 08:40:40.324716 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 26 08:40:40 crc kubenswrapper[4741]: I0226 08:40:40.717847 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ldrfn"] Feb 26 08:40:40 crc kubenswrapper[4741]: W0226 08:40:40.726421 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf37f72b3_ec4f_4fb8_b730_d7850bbbb964.slice/crio-c3aed75e7c0cec3b28b785f791bbe4b6026364c2779d727d053826e362f3028d WatchSource:0}: Error finding container c3aed75e7c0cec3b28b785f791bbe4b6026364c2779d727d053826e362f3028d: Status 404 returned error can't find the container with id c3aed75e7c0cec3b28b785f791bbe4b6026364c2779d727d053826e362f3028d Feb 26 08:40:40 crc kubenswrapper[4741]: I0226 08:40:40.954740 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cf7rh"] Feb 26 08:40:40 crc kubenswrapper[4741]: I0226 08:40:40.957992 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ldrfn" 
event={"ID":"f37f72b3-ec4f-4fb8-b730-d7850bbbb964","Type":"ContainerStarted","Data":"c3aed75e7c0cec3b28b785f791bbe4b6026364c2779d727d053826e362f3028d"} Feb 26 08:40:40 crc kubenswrapper[4741]: W0226 08:40:40.959636 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fbbec2b_c3f8_406b_ac5a_6c9749b1631d.slice/crio-2dc1bf08b4db58e5d54bb311b16bf1780ff932d69dbfb371516e6d05231afc3a WatchSource:0}: Error finding container 2dc1bf08b4db58e5d54bb311b16bf1780ff932d69dbfb371516e6d05231afc3a: Status 404 returned error can't find the container with id 2dc1bf08b4db58e5d54bb311b16bf1780ff932d69dbfb371516e6d05231afc3a Feb 26 08:40:40 crc kubenswrapper[4741]: I0226 08:40:40.969152 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"c7b01902ec99e54bfcd63dc2a4e96b3dd0e62286ee3b2ac6607359841448e194"} Feb 26 08:40:40 crc kubenswrapper[4741]: I0226 08:40:40.973233 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4w2q8"] Feb 26 08:40:40 crc kubenswrapper[4741]: I0226 08:40:40.988474 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-ddszv"] Feb 26 08:40:41 crc kubenswrapper[4741]: I0226 08:40:41.172486 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 08:40:41 crc kubenswrapper[4741]: I0226 08:40:41.306451 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 08:40:41 crc kubenswrapper[4741]: W0226 08:40:41.340029 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fa53b25_50bc_43dc_ba54_dcda79b2c911.slice/crio-7eee75140a3c84968d857b54c495c9f420d1dd091c4962d904eb472d7a1b830c WatchSource:0}: Error finding container 
7eee75140a3c84968d857b54c495c9f420d1dd091c4962d904eb472d7a1b830c: Status 404 returned error can't find the container with id 7eee75140a3c84968d857b54c495c9f420d1dd091c4962d904eb472d7a1b830c Feb 26 08:40:41 crc kubenswrapper[4741]: I0226 08:40:41.992698 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaee063-eb59-4c8e-b482-de4efc08084a","Type":"ContainerStarted","Data":"9941961b65e63fc70e1e3738177edfe9571f1ed1b7322139645f4a9845b664bd"} Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.003347 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mjbs" event={"ID":"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba","Type":"ContainerStarted","Data":"556478990e2477a025138b7fdd8fd30b577013a5cfa487a8f52da05891c49a52"} Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.013825 4741 generic.go:334] "Generic (PLEG): container finished" podID="959240c9-4eb8-4236-9927-758997ebf0a0" containerID="bef51badf4218dc2572110d55bf70f30af64faae6455c98c4081a4e0906723b5" exitCode=0 Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.013941 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf7rh" event={"ID":"959240c9-4eb8-4236-9927-758997ebf0a0","Type":"ContainerDied","Data":"bef51badf4218dc2572110d55bf70f30af64faae6455c98c4081a4e0906723b5"} Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.013990 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf7rh" event={"ID":"959240c9-4eb8-4236-9927-758997ebf0a0","Type":"ContainerStarted","Data":"389e4a86930dc265119eae203935a1e897cc73f6bb54911480ef21b190aa1eb5"} Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.019126 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hfwv4" 
event={"ID":"16e2e3de-8ab8-4670-b4c5-6375011e04e7","Type":"ContainerStarted","Data":"91c0e440581e76678224111a00fdd7560adc835d87192175671a3f0a096b4a80"} Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.022662 4741 generic.go:334] "Generic (PLEG): container finished" podID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" containerID="f8529f5892d6ee5c9c46614548ddc4d76ecb447bb038fec2434aec6a92bd2568" exitCode=0 Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.022749 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ldrfn" event={"ID":"f37f72b3-ec4f-4fb8-b730-d7850bbbb964","Type":"ContainerDied","Data":"f8529f5892d6ee5c9c46614548ddc4d76ecb447bb038fec2434aec6a92bd2568"} Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.032369 4741 generic.go:334] "Generic (PLEG): container finished" podID="bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" containerID="bd1a1dae9d89920e94d643d2b52c1aa6549da38ee45cd5b8fbf9c0a7eabb476b" exitCode=0 Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.032485 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-ddszv" event={"ID":"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697","Type":"ContainerDied","Data":"bd1a1dae9d89920e94d643d2b52c1aa6549da38ee45cd5b8fbf9c0a7eabb476b"} Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.032522 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-ddszv" event={"ID":"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697","Type":"ContainerStarted","Data":"8b451f9f3d255e7522224a923d988eb3b0bfbe3b9df910d89288c2a763363975"} Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.036243 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4w2q8" event={"ID":"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d","Type":"ContainerStarted","Data":"7d2d5c20e4d9d41731ee614841e831503fb1b42a2e5f3c23d08147b0a2021c5f"} Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.036319 4741 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4w2q8" event={"ID":"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d","Type":"ContainerStarted","Data":"2dc1bf08b4db58e5d54bb311b16bf1780ff932d69dbfb371516e6d05231afc3a"} Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.041163 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534920-nwmvg" event={"ID":"bc7465b7-f180-4fe8-9c29-3e75da8c867c","Type":"ContainerStarted","Data":"725526e53136da120a04d3bd2776ddbc05e51d9c0b6eb2b02aa75da41e26bf30"} Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.043074 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"99620102-91cd-41b0-a17d-3fc318bef67e","Type":"ContainerStarted","Data":"d30f9917b993daa7adfeb374a4c53831c372494c08475aab31c793ba0fbdd6d7"} Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.071556 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3fa53b25-50bc-43dc-ba54-dcda79b2c911","Type":"ContainerStarted","Data":"7eee75140a3c84968d857b54c495c9f420d1dd091c4962d904eb472d7a1b830c"} Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.081464 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hfwv4" podStartSLOduration=4.535400669 podStartE2EDuration="1m2.081437383s" podCreationTimestamp="2026-02-26 08:39:40 +0000 UTC" firstStartedPulling="2026-02-26 08:39:42.570700155 +0000 UTC m=+1617.566637542" lastFinishedPulling="2026-02-26 08:40:40.116736869 +0000 UTC m=+1675.112674256" observedRunningTime="2026-02-26 08:40:42.052719449 +0000 UTC m=+1677.048656836" watchObservedRunningTime="2026-02-26 08:40:42.081437383 +0000 UTC m=+1677.077374770" Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.128199 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4w2q8" 
podStartSLOduration=25.128177939 podStartE2EDuration="25.128177939s" podCreationTimestamp="2026-02-26 08:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:40:42.116263051 +0000 UTC m=+1677.112200438" watchObservedRunningTime="2026-02-26 08:40:42.128177939 +0000 UTC m=+1677.124115326" Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.868337 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:40:42 crc kubenswrapper[4741]: I0226 08:40:42.869594 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:40:43 crc kubenswrapper[4741]: I0226 08:40:43.101390 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"ad6795c724183844390247dd80ca1594baa8de66cb8e6c343cebaa5a2567b94f"} Feb 26 08:40:43 crc kubenswrapper[4741]: I0226 08:40:43.110589 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3fa53b25-50bc-43dc-ba54-dcda79b2c911","Type":"ContainerStarted","Data":"d4c6f2c4af7569f479726c16ce93172d6fa470d6a0ee0e3f944484b69a7e29a3"} Feb 26 08:40:43 crc kubenswrapper[4741]: I0226 08:40:43.119304 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-ddszv" event={"ID":"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697","Type":"ContainerStarted","Data":"53b5b159428f0c7df8c7063e26d98a629a0fb8abe18e390cf84bb18dbb3c04bf"} Feb 26 08:40:43 crc kubenswrapper[4741]: I0226 08:40:43.119469 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:43 crc kubenswrapper[4741]: I0226 08:40:43.128703 4741 generic.go:334] "Generic (PLEG): container finished" 
podID="bc7465b7-f180-4fe8-9c29-3e75da8c867c" containerID="725526e53136da120a04d3bd2776ddbc05e51d9c0b6eb2b02aa75da41e26bf30" exitCode=0 Feb 26 08:40:43 crc kubenswrapper[4741]: I0226 08:40:43.129243 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534920-nwmvg" event={"ID":"bc7465b7-f180-4fe8-9c29-3e75da8c867c","Type":"ContainerDied","Data":"725526e53136da120a04d3bd2776ddbc05e51d9c0b6eb2b02aa75da41e26bf30"} Feb 26 08:40:43 crc kubenswrapper[4741]: I0226 08:40:43.134695 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"99620102-91cd-41b0-a17d-3fc318bef67e","Type":"ContainerStarted","Data":"22d0cba31872ed8a2065c6f7ce011e181a71a2de1b475adc157c8c47647104d2"} Feb 26 08:40:43 crc kubenswrapper[4741]: I0226 08:40:43.177263 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84976bdf-ddszv" podStartSLOduration=9.177224803 podStartE2EDuration="9.177224803s" podCreationTimestamp="2026-02-26 08:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:40:43.149203708 +0000 UTC m=+1678.145141115" watchObservedRunningTime="2026-02-26 08:40:43.177224803 +0000 UTC m=+1678.173162190" Feb 26 08:40:43 crc kubenswrapper[4741]: I0226 08:40:43.673759 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534920-nwmvg" Feb 26 08:40:43 crc kubenswrapper[4741]: I0226 08:40:43.722331 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npspj\" (UniqueName: \"kubernetes.io/projected/bc7465b7-f180-4fe8-9c29-3e75da8c867c-kube-api-access-npspj\") pod \"bc7465b7-f180-4fe8-9c29-3e75da8c867c\" (UID: \"bc7465b7-f180-4fe8-9c29-3e75da8c867c\") " Feb 26 08:40:43 crc kubenswrapper[4741]: I0226 08:40:43.748663 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc7465b7-f180-4fe8-9c29-3e75da8c867c-kube-api-access-npspj" (OuterVolumeSpecName: "kube-api-access-npspj") pod "bc7465b7-f180-4fe8-9c29-3e75da8c867c" (UID: "bc7465b7-f180-4fe8-9c29-3e75da8c867c"). InnerVolumeSpecName "kube-api-access-npspj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:40:43 crc kubenswrapper[4741]: I0226 08:40:43.830658 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npspj\" (UniqueName: \"kubernetes.io/projected/bc7465b7-f180-4fe8-9c29-3e75da8c867c-kube-api-access-npspj\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:43 crc kubenswrapper[4741]: I0226 08:40:43.927314 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:40:43 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:40:43 crc kubenswrapper[4741]: > Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.153432 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf7rh" event={"ID":"959240c9-4eb8-4236-9927-758997ebf0a0","Type":"ContainerStarted","Data":"dee6721ce555d47c9b3fba111eb6bdd064055f42da703224071222561104af0f"} Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.159294 
4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"99620102-91cd-41b0-a17d-3fc318bef67e","Type":"ContainerStarted","Data":"f404ed2255439d54bf37907fbe1da6f4266696999eabe3fc02e490557efb87e7"} Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.159465 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="99620102-91cd-41b0-a17d-3fc318bef67e" containerName="glance-log" containerID="cri-o://22d0cba31872ed8a2065c6f7ce011e181a71a2de1b475adc157c8c47647104d2" gracePeriod=30 Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.159498 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="99620102-91cd-41b0-a17d-3fc318bef67e" containerName="glance-httpd" containerID="cri-o://f404ed2255439d54bf37907fbe1da6f4266696999eabe3fc02e490557efb87e7" gracePeriod=30 Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.172740 4741 generic.go:334] "Generic (PLEG): container finished" podID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" containerID="2afee7cd2f0f477bb8968a79871b74438168525e59e00c39c6c6f454ec8d0b72" exitCode=0 Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.172802 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ldrfn" event={"ID":"f37f72b3-ec4f-4fb8-b730-d7850bbbb964","Type":"ContainerDied","Data":"2afee7cd2f0f477bb8968a79871b74438168525e59e00c39c6c6f454ec8d0b72"} Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.176504 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"14a02a3c48caa2cd6094f1a2f815f5c5a34da5ee67817540a0aaee29a688f902"} Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.176566 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"1057837f69e264e4fba0096348c032b0242182548ea2c1dffc3f607624a8c9f9"} Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.183753 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3fa53b25-50bc-43dc-ba54-dcda79b2c911","Type":"ContainerStarted","Data":"529283268feec021e63b81c337a94fe3f53aa09b8d012db314f5540713bf7355"} Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.188895 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3fa53b25-50bc-43dc-ba54-dcda79b2c911" containerName="glance-log" containerID="cri-o://d4c6f2c4af7569f479726c16ce93172d6fa470d6a0ee0e3f944484b69a7e29a3" gracePeriod=30 Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.189425 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3fa53b25-50bc-43dc-ba54-dcda79b2c911" containerName="glance-httpd" containerID="cri-o://529283268feec021e63b81c337a94fe3f53aa09b8d012db314f5540713bf7355" gracePeriod=30 Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.207422 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534920-nwmvg" Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.207934 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534920-nwmvg" event={"ID":"bc7465b7-f180-4fe8-9c29-3e75da8c867c","Type":"ContainerDied","Data":"d211f238d1249c70e1ece52564085f1b8f2634e2d4aee683fd533f4340975179"} Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.207962 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d211f238d1249c70e1ece52564085f1b8f2634e2d4aee683fd533f4340975179" Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.238830 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.238801813 podStartE2EDuration="10.238801813s" podCreationTimestamp="2026-02-26 08:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:40:44.230272891 +0000 UTC m=+1679.226210278" watchObservedRunningTime="2026-02-26 08:40:44.238801813 +0000 UTC m=+1679.234739200" Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.294691 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.294669337 podStartE2EDuration="10.294669337s" podCreationTimestamp="2026-02-26 08:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:40:44.289176822 +0000 UTC m=+1679.285114229" watchObservedRunningTime="2026-02-26 08:40:44.294669337 +0000 UTC m=+1679.290606724" Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 08:40:44.834405 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534914-q2xf7"] Feb 26 08:40:44 crc kubenswrapper[4741]: I0226 
08:40:44.864306 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534914-q2xf7"] Feb 26 08:40:45 crc kubenswrapper[4741]: I0226 08:40:45.222753 4741 generic.go:334] "Generic (PLEG): container finished" podID="3fa53b25-50bc-43dc-ba54-dcda79b2c911" containerID="529283268feec021e63b81c337a94fe3f53aa09b8d012db314f5540713bf7355" exitCode=143 Feb 26 08:40:45 crc kubenswrapper[4741]: I0226 08:40:45.223414 4741 generic.go:334] "Generic (PLEG): container finished" podID="3fa53b25-50bc-43dc-ba54-dcda79b2c911" containerID="d4c6f2c4af7569f479726c16ce93172d6fa470d6a0ee0e3f944484b69a7e29a3" exitCode=143 Feb 26 08:40:45 crc kubenswrapper[4741]: I0226 08:40:45.222883 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3fa53b25-50bc-43dc-ba54-dcda79b2c911","Type":"ContainerDied","Data":"529283268feec021e63b81c337a94fe3f53aa09b8d012db314f5540713bf7355"} Feb 26 08:40:45 crc kubenswrapper[4741]: I0226 08:40:45.223541 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3fa53b25-50bc-43dc-ba54-dcda79b2c911","Type":"ContainerDied","Data":"d4c6f2c4af7569f479726c16ce93172d6fa470d6a0ee0e3f944484b69a7e29a3"} Feb 26 08:40:45 crc kubenswrapper[4741]: I0226 08:40:45.232231 4741 generic.go:334] "Generic (PLEG): container finished" podID="959240c9-4eb8-4236-9927-758997ebf0a0" containerID="dee6721ce555d47c9b3fba111eb6bdd064055f42da703224071222561104af0f" exitCode=0 Feb 26 08:40:45 crc kubenswrapper[4741]: I0226 08:40:45.233311 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf7rh" event={"ID":"959240c9-4eb8-4236-9927-758997ebf0a0","Type":"ContainerDied","Data":"dee6721ce555d47c9b3fba111eb6bdd064055f42da703224071222561104af0f"} Feb 26 08:40:45 crc kubenswrapper[4741]: I0226 08:40:45.240376 4741 generic.go:334] "Generic (PLEG): container finished" 
podID="99620102-91cd-41b0-a17d-3fc318bef67e" containerID="f404ed2255439d54bf37907fbe1da6f4266696999eabe3fc02e490557efb87e7" exitCode=143 Feb 26 08:40:45 crc kubenswrapper[4741]: I0226 08:40:45.240409 4741 generic.go:334] "Generic (PLEG): container finished" podID="99620102-91cd-41b0-a17d-3fc318bef67e" containerID="22d0cba31872ed8a2065c6f7ce011e181a71a2de1b475adc157c8c47647104d2" exitCode=143 Feb 26 08:40:45 crc kubenswrapper[4741]: I0226 08:40:45.240456 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"99620102-91cd-41b0-a17d-3fc318bef67e","Type":"ContainerDied","Data":"f404ed2255439d54bf37907fbe1da6f4266696999eabe3fc02e490557efb87e7"} Feb 26 08:40:45 crc kubenswrapper[4741]: I0226 08:40:45.240510 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"99620102-91cd-41b0-a17d-3fc318bef67e","Type":"ContainerDied","Data":"22d0cba31872ed8a2065c6f7ce011e181a71a2de1b475adc157c8c47647104d2"} Feb 26 08:40:45 crc kubenswrapper[4741]: I0226 08:40:45.245559 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"f1685189428fb2afcc11800a5a299eb7c4cd065a5387830c95dfdbccc0ec7d28"} Feb 26 08:40:45 crc kubenswrapper[4741]: I0226 08:40:45.805407 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92daa308-ea4d-4e9d-aa8a-59f011190b00" path="/var/lib/kubelet/pods/92daa308-ea4d-4e9d-aa8a-59f011190b00/volumes" Feb 26 08:40:47 crc kubenswrapper[4741]: I0226 08:40:47.276673 4741 generic.go:334] "Generic (PLEG): container finished" podID="16e2e3de-8ab8-4670-b4c5-6375011e04e7" containerID="91c0e440581e76678224111a00fdd7560adc835d87192175671a3f0a096b4a80" exitCode=0 Feb 26 08:40:47 crc kubenswrapper[4741]: I0226 08:40:47.276793 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hfwv4" 
event={"ID":"16e2e3de-8ab8-4670-b4c5-6375011e04e7","Type":"ContainerDied","Data":"91c0e440581e76678224111a00fdd7560adc835d87192175671a3f0a096b4a80"} Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.663008 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.683983 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.808994 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/99620102-91cd-41b0-a17d-3fc318bef67e-httpd-run\") pod \"99620102-91cd-41b0-a17d-3fc318bef67e\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.810275 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99620102-91cd-41b0-a17d-3fc318bef67e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "99620102-91cd-41b0-a17d-3fc318bef67e" (UID: "99620102-91cd-41b0-a17d-3fc318bef67e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.810421 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.810486 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7td9\" (UniqueName: \"kubernetes.io/projected/3fa53b25-50bc-43dc-ba54-dcda79b2c911-kube-api-access-z7td9\") pod \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.810565 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-config-data\") pod \"99620102-91cd-41b0-a17d-3fc318bef67e\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.811487 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fa53b25-50bc-43dc-ba54-dcda79b2c911-logs\") pod \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.811541 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd9gx\" (UniqueName: \"kubernetes.io/projected/99620102-91cd-41b0-a17d-3fc318bef67e-kube-api-access-fd9gx\") pod \"99620102-91cd-41b0-a17d-3fc318bef67e\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.811593 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-combined-ca-bundle\") pod \"99620102-91cd-41b0-a17d-3fc318bef67e\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.811664 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-combined-ca-bundle\") pod \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.811965 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod \"99620102-91cd-41b0-a17d-3fc318bef67e\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.812013 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3fa53b25-50bc-43dc-ba54-dcda79b2c911-httpd-run\") pod \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.812090 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99620102-91cd-41b0-a17d-3fc318bef67e-logs\") pod \"99620102-91cd-41b0-a17d-3fc318bef67e\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.812144 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-scripts\") pod \"99620102-91cd-41b0-a17d-3fc318bef67e\" (UID: \"99620102-91cd-41b0-a17d-3fc318bef67e\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 
08:40:48.812194 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-scripts\") pod \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.812248 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-config-data\") pod \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\" (UID: \"3fa53b25-50bc-43dc-ba54-dcda79b2c911\") " Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.812826 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fa53b25-50bc-43dc-ba54-dcda79b2c911-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3fa53b25-50bc-43dc-ba54-dcda79b2c911" (UID: "3fa53b25-50bc-43dc-ba54-dcda79b2c911"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.813973 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fa53b25-50bc-43dc-ba54-dcda79b2c911-logs" (OuterVolumeSpecName: "logs") pod "3fa53b25-50bc-43dc-ba54-dcda79b2c911" (UID: "3fa53b25-50bc-43dc-ba54-dcda79b2c911"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.814155 4741 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3fa53b25-50bc-43dc-ba54-dcda79b2c911-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.814184 4741 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/99620102-91cd-41b0-a17d-3fc318bef67e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.818666 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99620102-91cd-41b0-a17d-3fc318bef67e-logs" (OuterVolumeSpecName: "logs") pod "99620102-91cd-41b0-a17d-3fc318bef67e" (UID: "99620102-91cd-41b0-a17d-3fc318bef67e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.822442 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-scripts" (OuterVolumeSpecName: "scripts") pod "3fa53b25-50bc-43dc-ba54-dcda79b2c911" (UID: "3fa53b25-50bc-43dc-ba54-dcda79b2c911"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.858552 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa53b25-50bc-43dc-ba54-dcda79b2c911-kube-api-access-z7td9" (OuterVolumeSpecName: "kube-api-access-z7td9") pod "3fa53b25-50bc-43dc-ba54-dcda79b2c911" (UID: "3fa53b25-50bc-43dc-ba54-dcda79b2c911"). InnerVolumeSpecName "kube-api-access-z7td9". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.859294 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab" (OuterVolumeSpecName: "glance") pod "3fa53b25-50bc-43dc-ba54-dcda79b2c911" (UID: "3fa53b25-50bc-43dc-ba54-dcda79b2c911"). InnerVolumeSpecName "pvc-66d3046e-b0f1-49dc-a936-827184187eab". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.859750 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-scripts" (OuterVolumeSpecName: "scripts") pod "99620102-91cd-41b0-a17d-3fc318bef67e" (UID: "99620102-91cd-41b0-a17d-3fc318bef67e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.861828 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99620102-91cd-41b0-a17d-3fc318bef67e-kube-api-access-fd9gx" (OuterVolumeSpecName: "kube-api-access-fd9gx") pod "99620102-91cd-41b0-a17d-3fc318bef67e" (UID: "99620102-91cd-41b0-a17d-3fc318bef67e"). InnerVolumeSpecName "kube-api-access-fd9gx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.879538 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99620102-91cd-41b0-a17d-3fc318bef67e" (UID: "99620102-91cd-41b0-a17d-3fc318bef67e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.882572 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3" (OuterVolumeSpecName: "glance") pod "99620102-91cd-41b0-a17d-3fc318bef67e" (UID: "99620102-91cd-41b0-a17d-3fc318bef67e"). InnerVolumeSpecName "pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.886337 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fa53b25-50bc-43dc-ba54-dcda79b2c911" (UID: "3fa53b25-50bc-43dc-ba54-dcda79b2c911"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.902381 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-config-data" (OuterVolumeSpecName: "config-data") pod "3fa53b25-50bc-43dc-ba54-dcda79b2c911" (UID: "3fa53b25-50bc-43dc-ba54-dcda79b2c911"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.916622 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7td9\" (UniqueName: \"kubernetes.io/projected/3fa53b25-50bc-43dc-ba54-dcda79b2c911-kube-api-access-z7td9\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.916657 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fa53b25-50bc-43dc-ba54-dcda79b2c911-logs\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.916674 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd9gx\" (UniqueName: \"kubernetes.io/projected/99620102-91cd-41b0-a17d-3fc318bef67e-kube-api-access-fd9gx\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.916684 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.916693 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.916716 4741 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") on node \"crc\" "
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.916728 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99620102-91cd-41b0-a17d-3fc318bef67e-logs\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.916738 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.916747 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.916756 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa53b25-50bc-43dc-ba54-dcda79b2c911-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.916776 4741 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") on node \"crc\" "
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.928645 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-config-data" (OuterVolumeSpecName: "config-data") pod "99620102-91cd-41b0-a17d-3fc318bef67e" (UID: "99620102-91cd-41b0-a17d-3fc318bef67e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.977666 4741 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.977878 4741 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3") on node "crc"
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.978198 4741 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 26 08:40:48 crc kubenswrapper[4741]: I0226 08:40:48.978464 4741 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-66d3046e-b0f1-49dc-a936-827184187eab" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab") on node "crc"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.019577 4741 reconciler_common.go:293] "Volume detached for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.019616 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99620102-91cd-41b0-a17d-3fc318bef67e-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.019632 4741 reconciler_common.go:293] "Volume detached for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.164412 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hfwv4"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.225769 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-scripts\") pod \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") "
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.225884 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx2qp\" (UniqueName: \"kubernetes.io/projected/16e2e3de-8ab8-4670-b4c5-6375011e04e7-kube-api-access-mx2qp\") pod \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") "
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.226047 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-config-data\") pod \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") "
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.226234 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-combined-ca-bundle\") pod \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") "
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.226304 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e2e3de-8ab8-4670-b4c5-6375011e04e7-logs\") pod \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\" (UID: \"16e2e3de-8ab8-4670-b4c5-6375011e04e7\") "
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.227572 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16e2e3de-8ab8-4670-b4c5-6375011e04e7-logs" (OuterVolumeSpecName: "logs") pod "16e2e3de-8ab8-4670-b4c5-6375011e04e7" (UID: "16e2e3de-8ab8-4670-b4c5-6375011e04e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.231059 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-scripts" (OuterVolumeSpecName: "scripts") pod "16e2e3de-8ab8-4670-b4c5-6375011e04e7" (UID: "16e2e3de-8ab8-4670-b4c5-6375011e04e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.233971 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e2e3de-8ab8-4670-b4c5-6375011e04e7-kube-api-access-mx2qp" (OuterVolumeSpecName: "kube-api-access-mx2qp") pod "16e2e3de-8ab8-4670-b4c5-6375011e04e7" (UID: "16e2e3de-8ab8-4670-b4c5-6375011e04e7"). InnerVolumeSpecName "kube-api-access-mx2qp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.264039 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16e2e3de-8ab8-4670-b4c5-6375011e04e7" (UID: "16e2e3de-8ab8-4670-b4c5-6375011e04e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.307855 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-config-data" (OuterVolumeSpecName: "config-data") pod "16e2e3de-8ab8-4670-b4c5-6375011e04e7" (UID: "16e2e3de-8ab8-4670-b4c5-6375011e04e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.322022 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"99620102-91cd-41b0-a17d-3fc318bef67e","Type":"ContainerDied","Data":"d30f9917b993daa7adfeb374a4c53831c372494c08475aab31c793ba0fbdd6d7"}
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.322098 4741 scope.go:117] "RemoveContainer" containerID="f404ed2255439d54bf37907fbe1da6f4266696999eabe3fc02e490557efb87e7"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.322123 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.331100 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.331175 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.331191 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e2e3de-8ab8-4670-b4c5-6375011e04e7-logs\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.331203 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e2e3de-8ab8-4670-b4c5-6375011e04e7-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.331219 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx2qp\" (UniqueName: \"kubernetes.io/projected/16e2e3de-8ab8-4670-b4c5-6375011e04e7-kube-api-access-mx2qp\") on node \"crc\" DevicePath \"\""
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.336397 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hfwv4" event={"ID":"16e2e3de-8ab8-4670-b4c5-6375011e04e7","Type":"ContainerDied","Data":"a841326b7f8c9ae960ff0d40d4e1ef18dcdef787e37f75f335c74a03b45a893d"}
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.336547 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a841326b7f8c9ae960ff0d40d4e1ef18dcdef787e37f75f335c74a03b45a893d"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.336796 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hfwv4"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.359647 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3fa53b25-50bc-43dc-ba54-dcda79b2c911","Type":"ContainerDied","Data":"7eee75140a3c84968d857b54c495c9f420d1dd091c4962d904eb472d7a1b830c"}
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.360391 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.468289 4741 scope.go:117] "RemoveContainer" containerID="22d0cba31872ed8a2065c6f7ce011e181a71a2de1b475adc157c8c47647104d2"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.472831 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.506055 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.529854 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d788959bb-k7x27"]
Feb 26 08:40:49 crc kubenswrapper[4741]: E0226 08:40:49.530617 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa53b25-50bc-43dc-ba54-dcda79b2c911" containerName="glance-httpd"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.530644 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa53b25-50bc-43dc-ba54-dcda79b2c911" containerName="glance-httpd"
Feb 26 08:40:49 crc kubenswrapper[4741]: E0226 08:40:49.530670 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e2e3de-8ab8-4670-b4c5-6375011e04e7" containerName="placement-db-sync"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.530690 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e2e3de-8ab8-4670-b4c5-6375011e04e7" containerName="placement-db-sync"
Feb 26 08:40:49 crc kubenswrapper[4741]: E0226 08:40:49.530722 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa53b25-50bc-43dc-ba54-dcda79b2c911" containerName="glance-log"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.530732 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa53b25-50bc-43dc-ba54-dcda79b2c911" containerName="glance-log"
Feb 26 08:40:49 crc kubenswrapper[4741]: E0226 08:40:49.530759 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99620102-91cd-41b0-a17d-3fc318bef67e" containerName="glance-log"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.530768 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="99620102-91cd-41b0-a17d-3fc318bef67e" containerName="glance-log"
Feb 26 08:40:49 crc kubenswrapper[4741]: E0226 08:40:49.530785 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99620102-91cd-41b0-a17d-3fc318bef67e" containerName="glance-httpd"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.530794 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="99620102-91cd-41b0-a17d-3fc318bef67e" containerName="glance-httpd"
Feb 26 08:40:49 crc kubenswrapper[4741]: E0226 08:40:49.530820 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7465b7-f180-4fe8-9c29-3e75da8c867c" containerName="oc"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.530828 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7465b7-f180-4fe8-9c29-3e75da8c867c" containerName="oc"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.531099 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fa53b25-50bc-43dc-ba54-dcda79b2c911" containerName="glance-httpd"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.531149 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="99620102-91cd-41b0-a17d-3fc318bef67e" containerName="glance-log"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.531168 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="99620102-91cd-41b0-a17d-3fc318bef67e" containerName="glance-httpd"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.531183 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fa53b25-50bc-43dc-ba54-dcda79b2c911" containerName="glance-log"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.531200 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7465b7-f180-4fe8-9c29-3e75da8c867c" containerName="oc"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.531216 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="16e2e3de-8ab8-4670-b4c5-6375011e04e7" containerName="placement-db-sync"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.532969 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.545622 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.546055 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.546206 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.546381 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8tm7v"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.546748 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.554295 4741 scope.go:117] "RemoveContainer" containerID="529283268feec021e63b81c337a94fe3f53aa09b8d012db314f5540713bf7355"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.554601 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.557929 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.573550 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.573868 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.575594 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7wdvc"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.575828 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.576077 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.660827 4741 scope.go:117] "RemoveContainer" containerID="d4c6f2c4af7569f479726c16ce93172d6fa470d6a0ee0e3f944484b69a7e29a3"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.663047 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.663185 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-combined-ca-bundle\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.663306 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c979493f-75ea-4b53-a806-87c225d5d936-logs\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.663410 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-logs\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.663525 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.663645 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-config-data\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.663707 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.676423 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.695584 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c979493f-75ea-4b53-a806-87c225d5d936-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.695707 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-internal-tls-certs\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.695794 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-scripts\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.695849 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.696003 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-public-tls-certs\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.696051 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.696261 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pffrx\" (UniqueName: \"kubernetes.io/projected/c979493f-75ea-4b53-a806-87c225d5d936-kube-api-access-pffrx\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.731003 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtrmj\" (UniqueName: \"kubernetes.io/projected/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-kube-api-access-rtrmj\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.781727 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d788959bb-k7x27"]
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.838796 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pffrx\" (UniqueName: \"kubernetes.io/projected/c979493f-75ea-4b53-a806-87c225d5d936-kube-api-access-pffrx\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.838962 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtrmj\" (UniqueName: \"kubernetes.io/projected/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-kube-api-access-rtrmj\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.839036 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.839060 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-combined-ca-bundle\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.839153 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c979493f-75ea-4b53-a806-87c225d5d936-logs\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.839253 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-logs\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.839291 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.839306 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-config-data\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.839343 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.839387 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c979493f-75ea-4b53-a806-87c225d5d936-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.839425 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-internal-tls-certs\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.839450 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-scripts\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.839469 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.839538 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-public-tls-certs\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.839632 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.843887 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-logs\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.844085 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c979493f-75ea-4b53-a806-87c225d5d936-logs\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.854406 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c979493f-75ea-4b53-a806-87c225d5d936-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.856027 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.856062 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/417e49831a1cdf1c972733aab859fe5a1181b30877fdb96884b853b109c5ec95/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.861741 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-public-tls-certs\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.862203 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-internal-tls-certs\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.862419 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.862782 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.863012 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-scripts\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.863153 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-config-data\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27"
Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.863823 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.868764 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-combined-ca-bundle\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27" Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.875840 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtrmj\" (UniqueName: \"kubernetes.io/projected/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-kube-api-access-rtrmj\") pod \"placement-d788959bb-k7x27\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " pod="openstack/placement-d788959bb-k7x27" Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.876318 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pffrx\" (UniqueName: \"kubernetes.io/projected/c979493f-75ea-4b53-a806-87c225d5d936-kube-api-access-pffrx\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.883874 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.962621 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fa53b25-50bc-43dc-ba54-dcda79b2c911" path="/var/lib/kubelet/pods/3fa53b25-50bc-43dc-ba54-dcda79b2c911/volumes" Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.964993 4741 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="99620102-91cd-41b0-a17d-3fc318bef67e" path="/var/lib/kubelet/pods/99620102-91cd-41b0-a17d-3fc318bef67e/volumes" Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.966134 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.966173 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.966199 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.969315 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"glance-default-internal-api-0\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.973450 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.973646 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.983248 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 08:40:49 crc kubenswrapper[4741]: I0226 08:40:49.983592 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.043962 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d788959bb-k7x27" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.045830 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfbnl\" (UniqueName: \"kubernetes.io/projected/09de7a93-fe79-457d-8cdf-e710ca54e91a-kube-api-access-lfbnl\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.045866 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-scripts\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.045921 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09de7a93-fe79-457d-8cdf-e710ca54e91a-logs\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.045947 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-config-data\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.046051 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod 
\"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.046187 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09de7a93-fe79-457d-8cdf-e710ca54e91a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.046218 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.046275 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.050626 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.075408 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-p65c6"] Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.075716 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" podUID="06589f08-740e-47d3-ae3b-a44edd8d0842" containerName="dnsmasq-dns" containerID="cri-o://70fded0bbc9ca37fa5937d559e33a82b74f43dd052ebc2c3988eb3ce8649c2da" gracePeriod=10 Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.159819 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09de7a93-fe79-457d-8cdf-e710ca54e91a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.159883 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.159929 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.159959 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfbnl\" (UniqueName: 
\"kubernetes.io/projected/09de7a93-fe79-457d-8cdf-e710ca54e91a-kube-api-access-lfbnl\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.159981 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-scripts\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.160014 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09de7a93-fe79-457d-8cdf-e710ca54e91a-logs\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.160671 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09de7a93-fe79-457d-8cdf-e710ca54e91a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.164797 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09de7a93-fe79-457d-8cdf-e710ca54e91a-logs\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.167894 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.168880 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.174147 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-config-data\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.174472 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.175721 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-scripts\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.177898 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " 
pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.186423 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfbnl\" (UniqueName: \"kubernetes.io/projected/09de7a93-fe79-457d-8cdf-e710ca54e91a-kube-api-access-lfbnl\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.198796 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.198849 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/66385550d4571b43986ab0832fdd5f11a5f2b8cdc4d8f3b6edc74982f484140d/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.288326 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod \"glance-default-external-api-0\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.369824 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.414353 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaee063-eb59-4c8e-b482-de4efc08084a","Type":"ContainerStarted","Data":"0fd7044efa0b54602e120e1b9f5c1ca6c87b2f28a517526d58d38e35b74bcbc0"} Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.453932 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf7rh" event={"ID":"959240c9-4eb8-4236-9927-758997ebf0a0","Type":"ContainerStarted","Data":"35253af14222cce888452ee7920ee9362499af02caf6a760b00eb84b8256a002"} Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.469837 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ldrfn" event={"ID":"f37f72b3-ec4f-4fb8-b730-d7850bbbb964","Type":"ContainerStarted","Data":"fac3d5c984cbfd50d3029e45db3bee7f108a7a3732127f2b15df525e51eb0720"} Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.477638 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"5d945c3c81b4a863404ae132cffcf0fed879ebe3c2e588fcb005c39ced84572e"} Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.486154 4741 generic.go:334] "Generic (PLEG): container finished" podID="3fbbec2b-c3f8-406b-ac5a-6c9749b1631d" containerID="7d2d5c20e4d9d41731ee614841e831503fb1b42a2e5f3c23d08147b0a2021c5f" exitCode=0 Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.486252 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4w2q8" event={"ID":"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d","Type":"ContainerDied","Data":"7d2d5c20e4d9d41731ee614841e831503fb1b42a2e5f3c23d08147b0a2021c5f"} Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.513606 4741 generic.go:334] "Generic (PLEG): container 
finished" podID="06589f08-740e-47d3-ae3b-a44edd8d0842" containerID="70fded0bbc9ca37fa5937d559e33a82b74f43dd052ebc2c3988eb3ce8649c2da" exitCode=0 Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.514089 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" event={"ID":"06589f08-740e-47d3-ae3b-a44edd8d0842","Type":"ContainerDied","Data":"70fded0bbc9ca37fa5937d559e33a82b74f43dd052ebc2c3988eb3ce8649c2da"} Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.520317 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cf7rh" podStartSLOduration=42.741002538000004 podStartE2EDuration="49.520297898s" podCreationTimestamp="2026-02-26 08:40:01 +0000 UTC" firstStartedPulling="2026-02-26 08:40:42.436342359 +0000 UTC m=+1677.432279746" lastFinishedPulling="2026-02-26 08:40:49.215637719 +0000 UTC m=+1684.211575106" observedRunningTime="2026-02-26 08:40:50.492297091 +0000 UTC m=+1685.488234478" watchObservedRunningTime="2026-02-26 08:40:50.520297898 +0000 UTC m=+1685.516235285" Feb 26 08:40:50 crc kubenswrapper[4741]: I0226 08:40:50.520437 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ldrfn" podStartSLOduration=21.754681196 podStartE2EDuration="28.520433381s" podCreationTimestamp="2026-02-26 08:40:22 +0000 UTC" firstStartedPulling="2026-02-26 08:40:42.436609837 +0000 UTC m=+1677.432547224" lastFinishedPulling="2026-02-26 08:40:49.202362022 +0000 UTC m=+1684.198299409" observedRunningTime="2026-02-26 08:40:50.518306011 +0000 UTC m=+1685.514243398" watchObservedRunningTime="2026-02-26 08:40:50.520433381 +0000 UTC m=+1685.516370758" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.030841 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.160264 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc8mv\" (UniqueName: \"kubernetes.io/projected/06589f08-740e-47d3-ae3b-a44edd8d0842-kube-api-access-lc8mv\") pod \"06589f08-740e-47d3-ae3b-a44edd8d0842\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.160320 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-config\") pod \"06589f08-740e-47d3-ae3b-a44edd8d0842\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.160366 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-ovsdbserver-sb\") pod \"06589f08-740e-47d3-ae3b-a44edd8d0842\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.160472 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-dns-svc\") pod \"06589f08-740e-47d3-ae3b-a44edd8d0842\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.160651 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-ovsdbserver-nb\") pod \"06589f08-740e-47d3-ae3b-a44edd8d0842\" (UID: \"06589f08-740e-47d3-ae3b-a44edd8d0842\") " Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.172912 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/06589f08-740e-47d3-ae3b-a44edd8d0842-kube-api-access-lc8mv" (OuterVolumeSpecName: "kube-api-access-lc8mv") pod "06589f08-740e-47d3-ae3b-a44edd8d0842" (UID: "06589f08-740e-47d3-ae3b-a44edd8d0842"). InnerVolumeSpecName "kube-api-access-lc8mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.212698 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d788959bb-k7x27"] Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.274414 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc8mv\" (UniqueName: \"kubernetes.io/projected/06589f08-740e-47d3-ae3b-a44edd8d0842-kube-api-access-lc8mv\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:51 crc kubenswrapper[4741]: W0226 08:40:51.309290 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf6f5c31_f24d_4815_a2ed_0452b4a255b5.slice/crio-ab8348fb2ebf9fbcdbcb4415d35d39ee548939bc58ff89a9365211abb8071258 WatchSource:0}: Error finding container ab8348fb2ebf9fbcdbcb4415d35d39ee548939bc58ff89a9365211abb8071258: Status 404 returned error can't find the container with id ab8348fb2ebf9fbcdbcb4415d35d39ee548939bc58ff89a9365211abb8071258 Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.339071 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "06589f08-740e-47d3-ae3b-a44edd8d0842" (UID: "06589f08-740e-47d3-ae3b-a44edd8d0842"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.378406 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.416773 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.470031 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.470118 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.476211 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06589f08-740e-47d3-ae3b-a44edd8d0842" (UID: "06589f08-740e-47d3-ae3b-a44edd8d0842"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.492013 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.502739 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "06589f08-740e-47d3-ae3b-a44edd8d0842" (UID: "06589f08-740e-47d3-ae3b-a44edd8d0842"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.632637 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-config" (OuterVolumeSpecName: "config") pod "06589f08-740e-47d3-ae3b-a44edd8d0842" (UID: "06589f08-740e-47d3-ae3b-a44edd8d0842"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.643167 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" event={"ID":"06589f08-740e-47d3-ae3b-a44edd8d0842","Type":"ContainerDied","Data":"5f4443f7c368098f7675694d3b80f8b8e302880450da0482931d21428746d424"} Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.643263 4741 scope.go:117] "RemoveContainer" containerID="70fded0bbc9ca37fa5937d559e33a82b74f43dd052ebc2c3988eb3ce8649c2da" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.643784 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-p65c6" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.658202 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.704561 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8mgkv" event={"ID":"453e119a-80ff-4c19-b7d0-0860410fcc09","Type":"ContainerStarted","Data":"54d5013073b7db0d99842fab02cef1ebb2539a4cfc20185c8d026ff06e88f931"} Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.760845 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"0c71d0571f0d7adc09c6403f31fa0aff727a48d39f305f7c4ec4a14d8133bd87"} Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.760908 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"122f3da0c25ac1d9cf4a2221ec2013f7f494dacd346183358a73cc59a8d03151"} Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.791382 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06589f08-740e-47d3-ae3b-a44edd8d0842-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.880981 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d788959bb-k7x27" event={"ID":"bf6f5c31-f24d-4815-a2ed-0452b4a255b5","Type":"ContainerStarted","Data":"ab8348fb2ebf9fbcdbcb4415d35d39ee548939bc58ff89a9365211abb8071258"} Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.881039 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 08:40:51 crc 
kubenswrapper[4741]: I0226 08:40:51.881067 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-p65c6"] Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.881084 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c979493f-75ea-4b53-a806-87c225d5d936","Type":"ContainerStarted","Data":"9faf2ab54474609813abf9bdd4735d6869f6480f10cf0ff4197be11030a661a3"} Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.886192 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-p65c6"] Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.886603 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-8mgkv" podStartSLOduration=4.148364973 podStartE2EDuration="1m11.886582876s" podCreationTimestamp="2026-02-26 08:39:40 +0000 UTC" firstStartedPulling="2026-02-26 08:39:42.857496109 +0000 UTC m=+1617.853433496" lastFinishedPulling="2026-02-26 08:40:50.595714012 +0000 UTC m=+1685.591651399" observedRunningTime="2026-02-26 08:40:51.756078195 +0000 UTC m=+1686.752015592" watchObservedRunningTime="2026-02-26 08:40:51.886582876 +0000 UTC m=+1686.882520253" Feb 26 08:40:51 crc kubenswrapper[4741]: I0226 08:40:51.954878 4741 scope.go:117] "RemoveContainer" containerID="6cf7fb982771922d687603c5fcb704dbc28bdc9e387250c6379283c5eeb3b663" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.437499 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.595189 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cf7rh" podUID="959240c9-4eb8-4236-9927-758997ebf0a0" containerName="registry-server" probeResult="failure" output=< Feb 26 08:40:52 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:40:52 crc kubenswrapper[4741]: > Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.625718 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2lkl\" (UniqueName: \"kubernetes.io/projected/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-kube-api-access-l2lkl\") pod \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.625858 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-scripts\") pod \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.626012 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-config-data\") pod \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.626223 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-combined-ca-bundle\") pod \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.626361 4741 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-fernet-keys\") pod \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.626451 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-credential-keys\") pod \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\" (UID: \"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d\") " Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.633594 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3fbbec2b-c3f8-406b-ac5a-6c9749b1631d" (UID: "3fbbec2b-c3f8-406b-ac5a-6c9749b1631d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.635282 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3fbbec2b-c3f8-406b-ac5a-6c9749b1631d" (UID: "3fbbec2b-c3f8-406b-ac5a-6c9749b1631d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.640326 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-scripts" (OuterVolumeSpecName: "scripts") pod "3fbbec2b-c3f8-406b-ac5a-6c9749b1631d" (UID: "3fbbec2b-c3f8-406b-ac5a-6c9749b1631d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.640512 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-kube-api-access-l2lkl" (OuterVolumeSpecName: "kube-api-access-l2lkl") pod "3fbbec2b-c3f8-406b-ac5a-6c9749b1631d" (UID: "3fbbec2b-c3f8-406b-ac5a-6c9749b1631d"). InnerVolumeSpecName "kube-api-access-l2lkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.715938 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fbbec2b-c3f8-406b-ac5a-6c9749b1631d" (UID: "3fbbec2b-c3f8-406b-ac5a-6c9749b1631d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.734579 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.734619 4741 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.734631 4741 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.734641 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2lkl\" (UniqueName: 
\"kubernetes.io/projected/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-kube-api-access-l2lkl\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.734654 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.741701 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7cb84cbfff-vmnwr"] Feb 26 08:40:52 crc kubenswrapper[4741]: E0226 08:40:52.742364 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06589f08-740e-47d3-ae3b-a44edd8d0842" containerName="init" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.742392 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="06589f08-740e-47d3-ae3b-a44edd8d0842" containerName="init" Feb 26 08:40:52 crc kubenswrapper[4741]: E0226 08:40:52.742413 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fbbec2b-c3f8-406b-ac5a-6c9749b1631d" containerName="keystone-bootstrap" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.742422 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbbec2b-c3f8-406b-ac5a-6c9749b1631d" containerName="keystone-bootstrap" Feb 26 08:40:52 crc kubenswrapper[4741]: E0226 08:40:52.742468 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06589f08-740e-47d3-ae3b-a44edd8d0842" containerName="dnsmasq-dns" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.742476 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="06589f08-740e-47d3-ae3b-a44edd8d0842" containerName="dnsmasq-dns" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.742763 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fbbec2b-c3f8-406b-ac5a-6c9749b1631d" containerName="keystone-bootstrap" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.742784 4741 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="06589f08-740e-47d3-ae3b-a44edd8d0842" containerName="dnsmasq-dns" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.747465 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.752794 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.755392 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7cb84cbfff-vmnwr"] Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.757802 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.832518 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-config-data" (OuterVolumeSpecName: "config-data") pod "3fbbec2b-c3f8-406b-ac5a-6c9749b1631d" (UID: "3fbbec2b-c3f8-406b-ac5a-6c9749b1631d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.844279 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-internal-tls-certs\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.844375 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-combined-ca-bundle\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.844534 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-scripts\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.844567 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-config-data\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.844593 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9ltl\" (UniqueName: \"kubernetes.io/projected/62a4dea8-4285-4342-9c08-a97916f65b3d-kube-api-access-v9ltl\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: 
\"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.844623 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-public-tls-certs\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.844681 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-fernet-keys\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.844701 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-credential-keys\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.844802 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.949489 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-config-data\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.949549 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9ltl\" (UniqueName: \"kubernetes.io/projected/62a4dea8-4285-4342-9c08-a97916f65b3d-kube-api-access-v9ltl\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.949573 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-public-tls-certs\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.949655 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-fernet-keys\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.949675 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-credential-keys\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.949745 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-internal-tls-certs\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.949774 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-combined-ca-bundle\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.949881 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-scripts\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.961372 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-internal-tls-certs\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.967192 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-credential-keys\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.977852 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-public-tls-certs\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.980040 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-scripts\") pod 
\"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.982041 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-combined-ca-bundle\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.988793 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"499d45d3d74094608b45bcccf240e464e3d0da909e72391948f50f58babc4c45"} Feb 26 08:40:52 crc kubenswrapper[4741]: I0226 08:40:52.993831 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-config-data\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:53 crc kubenswrapper[4741]: I0226 08:40:53.002656 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62a4dea8-4285-4342-9c08-a97916f65b3d-fernet-keys\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:53 crc kubenswrapper[4741]: I0226 08:40:53.005252 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4w2q8" event={"ID":"3fbbec2b-c3f8-406b-ac5a-6c9749b1631d","Type":"ContainerDied","Data":"2dc1bf08b4db58e5d54bb311b16bf1780ff932d69dbfb371516e6d05231afc3a"} Feb 26 08:40:53 crc kubenswrapper[4741]: I0226 08:40:53.005299 4741 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2dc1bf08b4db58e5d54bb311b16bf1780ff932d69dbfb371516e6d05231afc3a" Feb 26 08:40:53 crc kubenswrapper[4741]: I0226 08:40:53.005386 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4w2q8" Feb 26 08:40:53 crc kubenswrapper[4741]: I0226 08:40:53.017072 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9ltl\" (UniqueName: \"kubernetes.io/projected/62a4dea8-4285-4342-9c08-a97916f65b3d-kube-api-access-v9ltl\") pod \"keystone-7cb84cbfff-vmnwr\" (UID: \"62a4dea8-4285-4342-9c08-a97916f65b3d\") " pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:53 crc kubenswrapper[4741]: I0226 08:40:53.023511 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-8n574" event={"ID":"61912c33-f4b2-4d1e-a2a0-df63c70ac97f","Type":"ContainerStarted","Data":"e271dbba8ceafff7070556a9155960d7e34b9f2b3a174e19e71ea3e64b902b55"} Feb 26 08:40:53 crc kubenswrapper[4741]: I0226 08:40:53.034863 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d788959bb-k7x27" event={"ID":"bf6f5c31-f24d-4815-a2ed-0452b4a255b5","Type":"ContainerStarted","Data":"361299b8f26873b7e17b5f0dbc5e4e44625d859b02e1a5c0f0835a98a84e4c4f"} Feb 26 08:40:53 crc kubenswrapper[4741]: I0226 08:40:53.039464 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09de7a93-fe79-457d-8cdf-e710ca54e91a","Type":"ContainerStarted","Data":"14e13d3a04d78dc5855a333c9dba693ba435435dc8b8e2b7700d6d62889e4eff"} Feb 26 08:40:53 crc kubenswrapper[4741]: I0226 08:40:53.083289 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:40:53 crc kubenswrapper[4741]: I0226 08:40:53.085225 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:40:53 crc 
kubenswrapper[4741]: I0226 08:40:53.122531 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-8n574" podStartSLOduration=4.083849445 podStartE2EDuration="1m13.122503638s" podCreationTimestamp="2026-02-26 08:39:40 +0000 UTC" firstStartedPulling="2026-02-26 08:39:42.339565489 +0000 UTC m=+1617.335502876" lastFinishedPulling="2026-02-26 08:40:51.378219682 +0000 UTC m=+1686.374157069" observedRunningTime="2026-02-26 08:40:53.089725486 +0000 UTC m=+1688.085662873" watchObservedRunningTime="2026-02-26 08:40:53.122503638 +0000 UTC m=+1688.118441025" Feb 26 08:40:53 crc kubenswrapper[4741]: I0226 08:40:53.140066 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:53 crc kubenswrapper[4741]: I0226 08:40:53.846201 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06589f08-740e-47d3-ae3b-a44edd8d0842" path="/var/lib/kubelet/pods/06589f08-740e-47d3-ae3b-a44edd8d0842/volumes" Feb 26 08:40:53 crc kubenswrapper[4741]: I0226 08:40:53.984432 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7cb84cbfff-vmnwr"] Feb 26 08:40:53 crc kubenswrapper[4741]: W0226 08:40:53.985962 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62a4dea8_4285_4342_9c08_a97916f65b3d.slice/crio-ae633e5af1a91cb5bc77807cd3a8fb83ee9a3fd539c5d92d11d4b985f889ad2f WatchSource:0}: Error finding container ae633e5af1a91cb5bc77807cd3a8fb83ee9a3fd539c5d92d11d4b985f889ad2f: Status 404 returned error can't find the container with id ae633e5af1a91cb5bc77807cd3a8fb83ee9a3fd539c5d92d11d4b985f889ad2f Feb 26 08:40:54 crc kubenswrapper[4741]: I0226 08:40:54.009266 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" 
probeResult="failure" output=< Feb 26 08:40:54 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:40:54 crc kubenswrapper[4741]: > Feb 26 08:40:54 crc kubenswrapper[4741]: I0226 08:40:54.121285 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d788959bb-k7x27" event={"ID":"bf6f5c31-f24d-4815-a2ed-0452b4a255b5","Type":"ContainerStarted","Data":"ff35a78726fff2360bc9fbdccade7edc73ebd043bc0c75b111328b4095a4f92c"} Feb 26 08:40:54 crc kubenswrapper[4741]: I0226 08:40:54.122724 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d788959bb-k7x27" Feb 26 08:40:54 crc kubenswrapper[4741]: I0226 08:40:54.122830 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d788959bb-k7x27" Feb 26 08:40:54 crc kubenswrapper[4741]: I0226 08:40:54.128201 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09de7a93-fe79-457d-8cdf-e710ca54e91a","Type":"ContainerStarted","Data":"a39cd06c7c4d4de510a2ea0012e028f74d282f530f9c1405f4ac3c898cd66b25"} Feb 26 08:40:54 crc kubenswrapper[4741]: I0226 08:40:54.140071 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c979493f-75ea-4b53-a806-87c225d5d936","Type":"ContainerStarted","Data":"5ff9dd60095d3da8aa776b3fc7a726cf8f951fe745f5f8e9b6c4455cc9c0d517"} Feb 26 08:40:54 crc kubenswrapper[4741]: I0226 08:40:54.145924 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7cb84cbfff-vmnwr" event={"ID":"62a4dea8-4285-4342-9c08-a97916f65b3d","Type":"ContainerStarted","Data":"ae633e5af1a91cb5bc77807cd3a8fb83ee9a3fd539c5d92d11d4b985f889ad2f"} Feb 26 08:40:54 crc kubenswrapper[4741]: I0226 08:40:54.183745 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d788959bb-k7x27" podStartSLOduration=5.183722003 
podStartE2EDuration="5.183722003s" podCreationTimestamp="2026-02-26 08:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:40:54.143637993 +0000 UTC m=+1689.139575380" watchObservedRunningTime="2026-02-26 08:40:54.183722003 +0000 UTC m=+1689.179659390" Feb 26 08:40:54 crc kubenswrapper[4741]: I0226 08:40:54.257458 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-ldrfn" podUID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" containerName="registry-server" probeResult="failure" output=< Feb 26 08:40:54 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:40:54 crc kubenswrapper[4741]: > Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.148755 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.149621 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.149675 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.150835 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.150904 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" gracePeriod=600 Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.174119 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wbg8p" event={"ID":"4d1395bb-ffb5-492e-b214-4434c210acf7","Type":"ContainerStarted","Data":"7a76fc68ece25bc32a4f24d0da7b48e65616e40b69139b1ac12435a66b804ab3"} Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.195074 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09de7a93-fe79-457d-8cdf-e710ca54e91a","Type":"ContainerStarted","Data":"655ea2dd07b9e6afe9f707b9e0e738e0786edbe7b6a3921a1ad80dcc06a4f853"} Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.200203 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c979493f-75ea-4b53-a806-87c225d5d936","Type":"ContainerStarted","Data":"0be46b640e17fc99fc27314c3e7c4a24df15f4ecd27fb7e60d39cf9922c4f98b"} Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.223864 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7cb84cbfff-vmnwr" event={"ID":"62a4dea8-4285-4342-9c08-a97916f65b3d","Type":"ContainerStarted","Data":"da46c7ee9a72bfc02c44a6c5fbf515f9bd529643a02df249eb4f9284c78ccdff"} Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.223894 4741 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-wbg8p" podStartSLOduration=5.3243675249999995 podStartE2EDuration="1m15.223871718s" podCreationTimestamp="2026-02-26 08:39:40 +0000 UTC" firstStartedPulling="2026-02-26 08:39:42.805518745 +0000 UTC m=+1617.801456132" lastFinishedPulling="2026-02-26 08:40:52.705022938 +0000 UTC m=+1687.700960325" observedRunningTime="2026-02-26 08:40:55.199449674 +0000 UTC m=+1690.195387061" watchObservedRunningTime="2026-02-26 08:40:55.223871718 +0000 UTC m=+1690.219809105" Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.224172 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.253730 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.253705016 podStartE2EDuration="6.253705016s" podCreationTimestamp="2026-02-26 08:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:40:55.244876735 +0000 UTC m=+1690.240814122" watchObservedRunningTime="2026-02-26 08:40:55.253705016 +0000 UTC m=+1690.249642423" Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.301684 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7cb84cbfff-vmnwr" podStartSLOduration=3.30165874 podStartE2EDuration="3.30165874s" podCreationTimestamp="2026-02-26 08:40:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:40:55.284281616 +0000 UTC m=+1690.280219013" watchObservedRunningTime="2026-02-26 08:40:55.30165874 +0000 UTC m=+1690.297596127" Feb 26 08:40:55 crc kubenswrapper[4741]: I0226 08:40:55.331768 4741 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.331738815 podStartE2EDuration="6.331738815s" podCreationTimestamp="2026-02-26 08:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:40:55.320752063 +0000 UTC m=+1690.316689460" watchObservedRunningTime="2026-02-26 08:40:55.331738815 +0000 UTC m=+1690.327676202" Feb 26 08:40:55 crc kubenswrapper[4741]: E0226 08:40:55.353275 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:40:56 crc kubenswrapper[4741]: I0226 08:40:56.249197 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" exitCode=0 Feb 26 08:40:56 crc kubenswrapper[4741]: I0226 08:40:56.249285 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333"} Feb 26 08:40:56 crc kubenswrapper[4741]: I0226 08:40:56.249817 4741 scope.go:117] "RemoveContainer" containerID="288a5333cab594a75e3a28112d2f250579a2bdc002b7db4ded270dcedecce3e8" Feb 26 08:40:56 crc kubenswrapper[4741]: I0226 08:40:56.250326 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:40:56 crc kubenswrapper[4741]: E0226 08:40:56.250677 4741 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:40:56 crc kubenswrapper[4741]: I0226 08:40:56.288586 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"638437b049d8ddab2139771183c8aff4c5342294372570ebdb37087e28070c23"} Feb 26 08:40:56 crc kubenswrapper[4741]: I0226 08:40:56.288638 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"07c8e9b978f5ae12c4bd136620550eb7f3e0c76277da0bea129810353f47a689"} Feb 26 08:40:57 crc kubenswrapper[4741]: I0226 08:40:57.315875 4741 generic.go:334] "Generic (PLEG): container finished" podID="03befef7-03ec-47b0-b178-46e527d8198e" containerID="6054d73673f66b7d855edaf87e94603cff02a30938e643fbfdca26b130d21776" exitCode=0 Feb 26 08:40:57 crc kubenswrapper[4741]: I0226 08:40:57.316385 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sd2kk" event={"ID":"03befef7-03ec-47b0-b178-46e527d8198e","Type":"ContainerDied","Data":"6054d73673f66b7d855edaf87e94603cff02a30938e643fbfdca26b130d21776"} Feb 26 08:40:57 crc kubenswrapper[4741]: I0226 08:40:57.369686 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"ee0a65aedd70065d6a37e9c3fc0956c864b296620276eda9a6978f057c40150b"} Feb 26 08:40:57 crc kubenswrapper[4741]: I0226 08:40:57.369740 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"cb78a31218bdce30bcc8ce57706e972404f2315fdc86986496050090356c3919"} Feb 26 08:40:57 crc kubenswrapper[4741]: I0226 08:40:57.369751 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"1f9ddd55c166ab477fbe2bc22c36615c2d8ae0e5d3ad5238f4516d3681cf45bb"} Feb 26 08:40:57 crc kubenswrapper[4741]: I0226 08:40:57.369763 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"13e4eab652ec6f358bb9b9644227ffcca2655cdd9943632d01c69910d5b50003"} Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.386603 4741 generic.go:334] "Generic (PLEG): container finished" podID="453e119a-80ff-4c19-b7d0-0860410fcc09" containerID="54d5013073b7db0d99842fab02cef1ebb2539a4cfc20185c8d026ff06e88f931" exitCode=0 Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.386827 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8mgkv" event={"ID":"453e119a-80ff-4c19-b7d0-0860410fcc09","Type":"ContainerDied","Data":"54d5013073b7db0d99842fab02cef1ebb2539a4cfc20185c8d026ff06e88f931"} Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.412540 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91b0231b-fbdf-4714-ac14-d3621c8c7807","Type":"ContainerStarted","Data":"f769cec5e839b51eecf942805e11b4542f9c13ff08b9e54c05d12dbbc5996942"} Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.493566 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=171.701114165 podStartE2EDuration="3m6.493536138s" podCreationTimestamp="2026-02-26 08:37:52 +0000 UTC" firstStartedPulling="2026-02-26 08:40:40.439044671 
+0000 UTC m=+1675.434982048" lastFinishedPulling="2026-02-26 08:40:55.231466634 +0000 UTC m=+1690.227404021" observedRunningTime="2026-02-26 08:40:58.45000317 +0000 UTC m=+1693.445940557" watchObservedRunningTime="2026-02-26 08:40:58.493536138 +0000 UTC m=+1693.489473525" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.830471 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cgzpz"] Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.833293 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.834490 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cgzpz"] Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.861585 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.885762 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-config\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.885855 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.885928 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.886068 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.886178 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.886210 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbbqr\" (UniqueName: \"kubernetes.io/projected/c695186e-c672-45a4-933d-7f1546c18090-kube-api-access-kbbqr\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.989225 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.989376 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.989436 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.989471 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbbqr\" (UniqueName: \"kubernetes.io/projected/c695186e-c672-45a4-933d-7f1546c18090-kube-api-access-kbbqr\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.989548 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-config\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.989601 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.991046 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.991082 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.995144 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:58 crc kubenswrapper[4741]: I0226 08:40:58.995168 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-config\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:59 crc kubenswrapper[4741]: I0226 08:40:59.000593 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:59 crc kubenswrapper[4741]: I0226 08:40:59.015208 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbbqr\" (UniqueName: \"kubernetes.io/projected/c695186e-c672-45a4-933d-7f1546c18090-kube-api-access-kbbqr\") pod 
\"dnsmasq-dns-8b5c85b87-cgzpz\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") " pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:40:59 crc kubenswrapper[4741]: I0226 08:40:59.176733 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" Feb 26 08:41:00 crc kubenswrapper[4741]: I0226 08:41:00.051491 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 08:41:00 crc kubenswrapper[4741]: I0226 08:41:00.051842 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 08:41:00 crc kubenswrapper[4741]: I0226 08:41:00.100055 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 08:41:00 crc kubenswrapper[4741]: I0226 08:41:00.114529 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 08:41:00 crc kubenswrapper[4741]: I0226 08:41:00.373011 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 08:41:00 crc kubenswrapper[4741]: I0226 08:41:00.386941 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 08:41:00 crc kubenswrapper[4741]: I0226 08:41:00.448958 4741 generic.go:334] "Generic (PLEG): container finished" podID="61912c33-f4b2-4d1e-a2a0-df63c70ac97f" containerID="e271dbba8ceafff7070556a9155960d7e34b9f2b3a174e19e71ea3e64b902b55" exitCode=0 Feb 26 08:41:00 crc kubenswrapper[4741]: I0226 08:41:00.450730 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-8n574" event={"ID":"61912c33-f4b2-4d1e-a2a0-df63c70ac97f","Type":"ContainerDied","Data":"e271dbba8ceafff7070556a9155960d7e34b9f2b3a174e19e71ea3e64b902b55"} Feb 26 08:41:00 crc 
kubenswrapper[4741]: I0226 08:41:00.453792 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 08:41:00 crc kubenswrapper[4741]: I0226 08:41:00.453916 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 08:41:00 crc kubenswrapper[4741]: I0226 08:41:00.454869 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 08:41:00 crc kubenswrapper[4741]: I0226 08:41:00.485709 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 08:41:01 crc kubenswrapper[4741]: I0226 08:41:01.493904 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 08:41:01 crc kubenswrapper[4741]: I0226 08:41:01.494357 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 08:41:01 crc kubenswrapper[4741]: I0226 08:41:01.551727 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:41:01 crc kubenswrapper[4741]: I0226 08:41:01.629018 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.343408 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cf7rh"] Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.512799 4741 generic.go:334] "Generic (PLEG): container finished" podID="4d1395bb-ffb5-492e-b214-4434c210acf7" containerID="7a76fc68ece25bc32a4f24d0da7b48e65616e40b69139b1ac12435a66b804ab3" exitCode=0 Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.512929 4741 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.512951 4741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.513008 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wbg8p" event={"ID":"4d1395bb-ffb5-492e-b214-4434c210acf7","Type":"ContainerDied","Data":"7a76fc68ece25bc32a4f24d0da7b48e65616e40b69139b1ac12435a66b804ab3"} Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.893419 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8mgkv" Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.903324 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sd2kk" Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.930535 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-8n574" Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.968896 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d79v9\" (UniqueName: \"kubernetes.io/projected/453e119a-80ff-4c19-b7d0-0860410fcc09-kube-api-access-d79v9\") pod \"453e119a-80ff-4c19-b7d0-0860410fcc09\" (UID: \"453e119a-80ff-4c19-b7d0-0860410fcc09\") " Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.969057 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/453e119a-80ff-4c19-b7d0-0860410fcc09-combined-ca-bundle\") pod \"453e119a-80ff-4c19-b7d0-0860410fcc09\" (UID: \"453e119a-80ff-4c19-b7d0-0860410fcc09\") " Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.969235 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzmvv\" (UniqueName: 
\"kubernetes.io/projected/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-kube-api-access-vzmvv\") pod \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\" (UID: \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\") " Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.969349 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03befef7-03ec-47b0-b178-46e527d8198e-combined-ca-bundle\") pod \"03befef7-03ec-47b0-b178-46e527d8198e\" (UID: \"03befef7-03ec-47b0-b178-46e527d8198e\") " Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.969390 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-combined-ca-bundle\") pod \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\" (UID: \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\") " Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.969513 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-config-data\") pod \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\" (UID: \"61912c33-f4b2-4d1e-a2a0-df63c70ac97f\") " Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.969544 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03befef7-03ec-47b0-b178-46e527d8198e-config\") pod \"03befef7-03ec-47b0-b178-46e527d8198e\" (UID: \"03befef7-03ec-47b0-b178-46e527d8198e\") " Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.969593 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6wn8\" (UniqueName: \"kubernetes.io/projected/03befef7-03ec-47b0-b178-46e527d8198e-kube-api-access-d6wn8\") pod \"03befef7-03ec-47b0-b178-46e527d8198e\" (UID: \"03befef7-03ec-47b0-b178-46e527d8198e\") " Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 
08:41:02.969773 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/453e119a-80ff-4c19-b7d0-0860410fcc09-db-sync-config-data\") pod \"453e119a-80ff-4c19-b7d0-0860410fcc09\" (UID: \"453e119a-80ff-4c19-b7d0-0860410fcc09\") " Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.979880 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/453e119a-80ff-4c19-b7d0-0860410fcc09-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "453e119a-80ff-4c19-b7d0-0860410fcc09" (UID: "453e119a-80ff-4c19-b7d0-0860410fcc09"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.984403 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03befef7-03ec-47b0-b178-46e527d8198e-kube-api-access-d6wn8" (OuterVolumeSpecName: "kube-api-access-d6wn8") pod "03befef7-03ec-47b0-b178-46e527d8198e" (UID: "03befef7-03ec-47b0-b178-46e527d8198e"). InnerVolumeSpecName "kube-api-access-d6wn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:02 crc kubenswrapper[4741]: I0226 08:41:02.994165 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/453e119a-80ff-4c19-b7d0-0860410fcc09-kube-api-access-d79v9" (OuterVolumeSpecName: "kube-api-access-d79v9") pod "453e119a-80ff-4c19-b7d0-0860410fcc09" (UID: "453e119a-80ff-4c19-b7d0-0860410fcc09"). InnerVolumeSpecName "kube-api-access-d79v9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.000336 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-kube-api-access-vzmvv" (OuterVolumeSpecName: "kube-api-access-vzmvv") pod "61912c33-f4b2-4d1e-a2a0-df63c70ac97f" (UID: "61912c33-f4b2-4d1e-a2a0-df63c70ac97f"). InnerVolumeSpecName "kube-api-access-vzmvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.028137 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61912c33-f4b2-4d1e-a2a0-df63c70ac97f" (UID: "61912c33-f4b2-4d1e-a2a0-df63c70ac97f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.038893 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03befef7-03ec-47b0-b178-46e527d8198e-config" (OuterVolumeSpecName: "config") pod "03befef7-03ec-47b0-b178-46e527d8198e" (UID: "03befef7-03ec-47b0-b178-46e527d8198e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.047384 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/453e119a-80ff-4c19-b7d0-0860410fcc09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "453e119a-80ff-4c19-b7d0-0860410fcc09" (UID: "453e119a-80ff-4c19-b7d0-0860410fcc09"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.059202 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03befef7-03ec-47b0-b178-46e527d8198e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03befef7-03ec-47b0-b178-46e527d8198e" (UID: "03befef7-03ec-47b0-b178-46e527d8198e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.077440 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/453e119a-80ff-4c19-b7d0-0860410fcc09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.077476 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzmvv\" (UniqueName: \"kubernetes.io/projected/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-kube-api-access-vzmvv\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.077489 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03befef7-03ec-47b0-b178-46e527d8198e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.077498 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.077507 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/03befef7-03ec-47b0-b178-46e527d8198e-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.077518 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6wn8\" 
(UniqueName: \"kubernetes.io/projected/03befef7-03ec-47b0-b178-46e527d8198e-kube-api-access-d6wn8\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.077526 4741 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/453e119a-80ff-4c19-b7d0-0860410fcc09-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.077534 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d79v9\" (UniqueName: \"kubernetes.io/projected/453e119a-80ff-4c19-b7d0-0860410fcc09-kube-api-access-d79v9\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.110798 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-config-data" (OuterVolumeSpecName: "config-data") pod "61912c33-f4b2-4d1e-a2a0-df63c70ac97f" (UID: "61912c33-f4b2-4d1e-a2a0-df63c70ac97f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.182844 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61912c33-f4b2-4d1e-a2a0-df63c70ac97f-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.530583 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sd2kk" event={"ID":"03befef7-03ec-47b0-b178-46e527d8198e","Type":"ContainerDied","Data":"ced821b839b90d07b9d32119af5083a922733851f0329ce5bc6aacbfc6256dd1"} Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.530724 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sd2kk" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.530733 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ced821b839b90d07b9d32119af5083a922733851f0329ce5bc6aacbfc6256dd1" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.535514 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-8n574" event={"ID":"61912c33-f4b2-4d1e-a2a0-df63c70ac97f","Type":"ContainerDied","Data":"45f80b8094b718384643a69d98ef605ae9e49841d13f2b0b755705dcf2834026"} Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.535664 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45f80b8094b718384643a69d98ef605ae9e49841d13f2b0b755705dcf2834026" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.535903 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-8n574" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.541010 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8mgkv" event={"ID":"453e119a-80ff-4c19-b7d0-0860410fcc09","Type":"ContainerDied","Data":"f4715375cac84d4b8445d494ceda2a169085cb35dc95c8ff165aa70f1e1c0e54"} Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.541075 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4715375cac84d4b8445d494ceda2a169085cb35dc95c8ff165aa70f1e1c0e54" Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.541222 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-8mgkv"
Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.541307 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cf7rh" podUID="959240c9-4eb8-4236-9927-758997ebf0a0" containerName="registry-server" containerID="cri-o://35253af14222cce888452ee7920ee9362499af02caf6a760b00eb84b8256a002" gracePeriod=2
Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.541990 4741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.542015 4741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 26 08:41:03 crc kubenswrapper[4741]: I0226 08:41:03.969391 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=<
Feb 26 08:41:03 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s
Feb 26 08:41:03 crc kubenswrapper[4741]: >
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.145900 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-ldrfn" podUID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" containerName="registry-server" probeResult="failure" output=<
Feb 26 08:41:04 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s
Feb 26 08:41:04 crc kubenswrapper[4741]: >
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.229990 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cgzpz"]
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.334248 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-768d8b48ff-xfq8n"]
Feb 26 08:41:04 crc kubenswrapper[4741]: E0226 08:41:04.334935 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03befef7-03ec-47b0-b178-46e527d8198e" containerName="neutron-db-sync"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.334950 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="03befef7-03ec-47b0-b178-46e527d8198e" containerName="neutron-db-sync"
Feb 26 08:41:04 crc kubenswrapper[4741]: E0226 08:41:04.334970 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="453e119a-80ff-4c19-b7d0-0860410fcc09" containerName="barbican-db-sync"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.334976 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="453e119a-80ff-4c19-b7d0-0860410fcc09" containerName="barbican-db-sync"
Feb 26 08:41:04 crc kubenswrapper[4741]: E0226 08:41:04.335016 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61912c33-f4b2-4d1e-a2a0-df63c70ac97f" containerName="heat-db-sync"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.335024 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="61912c33-f4b2-4d1e-a2a0-df63c70ac97f" containerName="heat-db-sync"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.335265 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="61912c33-f4b2-4d1e-a2a0-df63c70ac97f" containerName="heat-db-sync"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.335276 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="453e119a-80ff-4c19-b7d0-0860410fcc09" containerName="barbican-db-sync"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.335290 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="03befef7-03ec-47b0-b178-46e527d8198e" containerName="neutron-db-sync"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.339886 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.346946 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.347707 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rrrgx"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.347892 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.395435 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7c4c8dc778-mb68n"]
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.402569 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.407022 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.436654 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4410803b-98d4-4e00-a854-4427dd5d3ebc-config-data\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.437409 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4410803b-98d4-4e00-a854-4427dd5d3ebc-combined-ca-bundle\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.437489 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-config-data-custom\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.437579 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gz5l\" (UniqueName: \"kubernetes.io/projected/4410803b-98d4-4e00-a854-4427dd5d3ebc-kube-api-access-2gz5l\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.437614 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-combined-ca-bundle\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.437645 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-config-data\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.437981 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-logs\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.438084 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4410803b-98d4-4e00-a854-4427dd5d3ebc-logs\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.438244 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mdsl\" (UniqueName: \"kubernetes.io/projected/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-kube-api-access-7mdsl\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.438328 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4410803b-98d4-4e00-a854-4427dd5d3ebc-config-data-custom\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.471520 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-s8b4c"]
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.509179 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.541915 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-logs\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.542014 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4410803b-98d4-4e00-a854-4427dd5d3ebc-logs\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.542082 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mdsl\" (UniqueName: \"kubernetes.io/projected/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-kube-api-access-7mdsl\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.542121 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4410803b-98d4-4e00-a854-4427dd5d3ebc-config-data-custom\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.542194 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4410803b-98d4-4e00-a854-4427dd5d3ebc-config-data\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.542217 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4410803b-98d4-4e00-a854-4427dd5d3ebc-combined-ca-bundle\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.542264 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-config-data-custom\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.542320 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gz5l\" (UniqueName: \"kubernetes.io/projected/4410803b-98d4-4e00-a854-4427dd5d3ebc-kube-api-access-2gz5l\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.542347 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-combined-ca-bundle\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.542372 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-config-data\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.557149 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4410803b-98d4-4e00-a854-4427dd5d3ebc-combined-ca-bundle\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.582720 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-768d8b48ff-xfq8n"]
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.582743 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-config-data-custom\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.583099 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-config-data\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.584067 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4410803b-98d4-4e00-a854-4427dd5d3ebc-config-data\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.590352 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gz5l\" (UniqueName: \"kubernetes.io/projected/4410803b-98d4-4e00-a854-4427dd5d3ebc-kube-api-access-2gz5l\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.595921 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4410803b-98d4-4e00-a854-4427dd5d3ebc-config-data-custom\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.596846 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4410803b-98d4-4e00-a854-4427dd5d3ebc-logs\") pod \"barbican-worker-768d8b48ff-xfq8n\" (UID: \"4410803b-98d4-4e00-a854-4427dd5d3ebc\") " pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.597334 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-logs\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.610404 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-combined-ca-bundle\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.636532 4741 generic.go:334] "Generic (PLEG): container finished" podID="959240c9-4eb8-4236-9927-758997ebf0a0" containerID="35253af14222cce888452ee7920ee9362499af02caf6a760b00eb84b8256a002" exitCode=0
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.636587 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf7rh" event={"ID":"959240c9-4eb8-4236-9927-758997ebf0a0","Type":"ContainerDied","Data":"35253af14222cce888452ee7920ee9362499af02caf6a760b00eb84b8256a002"}
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.645192 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.645253 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.645404 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrr5q\" (UniqueName: \"kubernetes.io/projected/3bc55121-fbed-47b1-93b8-ffe02186eceb-kube-api-access-jrr5q\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.645468 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.645491 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-config\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.645516 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.651139 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mdsl\" (UniqueName: \"kubernetes.io/projected/8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b-kube-api-access-7mdsl\") pod \"barbican-keystone-listener-7c4c8dc778-mb68n\" (UID: \"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b\") " pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.659370 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c4c8dc778-mb68n"]
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.699060 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-768d8b48ff-xfq8n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.782259 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-s8b4c"]
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.783493 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrr5q\" (UniqueName: \"kubernetes.io/projected/3bc55121-fbed-47b1-93b8-ffe02186eceb-kube-api-access-jrr5q\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.783666 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.783703 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-config\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.783746 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.783993 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.784019 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.785328 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.792376 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.842528 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrr5q\" (UniqueName: \"kubernetes.io/projected/3bc55121-fbed-47b1-93b8-ffe02186eceb-kube-api-access-jrr5q\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.849347 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-config\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.851923 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.852860 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.888369 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-s8b4c\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") " pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.896599 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.975577 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-s8b4c"]
Feb 26 08:41:04 crc kubenswrapper[4741]: I0226 08:41:04.993489 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-tv7bp"]
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.017394 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.038542 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-tv7bp"]
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.053280 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-66bc8c7d7d-sd4nm"]
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.066074 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66bc8c7d7d-sd4nm"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.066908 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66bc8c7d7d-sd4nm"]
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.074174 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.080136 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-864786857b-lmwmt"]
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.083564 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-864786857b-lmwmt"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.089591 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.089708 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zhhb5"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.089826 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.090496 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.097446 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-864786857b-lmwmt"]
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.103524 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw7p2\" (UniqueName: \"kubernetes.io/projected/cdd17750-029b-4a96-84ea-8e577dd288c2-kube-api-access-cw7p2\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.103596 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.103636 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.103670 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-config\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.103721 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.103938 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.208270 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw7p2\" (UniqueName: \"kubernetes.io/projected/cdd17750-029b-4a96-84ea-8e577dd288c2-kube-api-access-cw7p2\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.208355 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-ovndb-tls-certs\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.208389 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.208422 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.208460 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-config\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.208496 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t65zr\" (UniqueName: \"kubernetes.io/projected/86642216-602d-42f0-81f7-4834499a7539-kube-api-access-t65zr\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.208528 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-combined-ca-bundle\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.208560 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-config\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.208589 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e50a17a-32c0-4f25-8335-ea9f21ee5382-logs\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.208619 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.208659 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.209299 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-config-data\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.209380 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsfb7\" (UniqueName: \"kubernetes.io/projected/2e50a17a-32c0-4f25-8335-ea9f21ee5382-kube-api-access-gsfb7\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.209464 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-config-data-custom\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.209571 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-combined-ca-bundle\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.209641 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-config\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.209749 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-httpd-config\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.209912 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.209955 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.210813 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.211010 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.250296 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw7p2\" (UniqueName: \"kubernetes.io/projected/cdd17750-029b-4a96-84ea-8e577dd288c2-kube-api-access-cw7p2\") pod 
\"dnsmasq-dns-75c8ddd69c-tv7bp\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.312777 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsfb7\" (UniqueName: \"kubernetes.io/projected/2e50a17a-32c0-4f25-8335-ea9f21ee5382-kube-api-access-gsfb7\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.312852 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-config-data\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.312878 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-config-data-custom\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.312906 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-combined-ca-bundle\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.312945 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-httpd-config\") pod \"neutron-864786857b-lmwmt\" (UID: 
\"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.313003 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-ovndb-tls-certs\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.313041 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t65zr\" (UniqueName: \"kubernetes.io/projected/86642216-602d-42f0-81f7-4834499a7539-kube-api-access-t65zr\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.313059 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-combined-ca-bundle\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.313080 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-config\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.313101 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e50a17a-32c0-4f25-8335-ea9f21ee5382-logs\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 
08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.313806 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e50a17a-32c0-4f25-8335-ea9f21ee5382-logs\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.320253 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-combined-ca-bundle\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.320419 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-config\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.320574 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-httpd-config\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.323079 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-combined-ca-bundle\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.326184 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-ovndb-tls-certs\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.326463 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-config-data\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.332769 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t65zr\" (UniqueName: \"kubernetes.io/projected/86642216-602d-42f0-81f7-4834499a7539-kube-api-access-t65zr\") pod \"neutron-864786857b-lmwmt\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.332853 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-config-data-custom\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.335736 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsfb7\" (UniqueName: \"kubernetes.io/projected/2e50a17a-32c0-4f25-8335-ea9f21ee5382-kube-api-access-gsfb7\") pod \"barbican-api-66bc8c7d7d-sd4nm\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.350022 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.392227 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:05 crc kubenswrapper[4741]: I0226 08:41:05.407294 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.034302 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.034496 4741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.047914 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.129067 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.129636 4741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.154102 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.158768 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.239364 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d1395bb-ffb5-492e-b214-4434c210acf7-etc-machine-id\") pod \"4d1395bb-ffb5-492e-b214-4434c210acf7\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.239497 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-scripts\") pod \"4d1395bb-ffb5-492e-b214-4434c210acf7\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.239563 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5hpc\" (UniqueName: \"kubernetes.io/projected/4d1395bb-ffb5-492e-b214-4434c210acf7-kube-api-access-d5hpc\") pod \"4d1395bb-ffb5-492e-b214-4434c210acf7\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.239735 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-combined-ca-bundle\") pod \"4d1395bb-ffb5-492e-b214-4434c210acf7\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.239793 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-config-data\") pod \"4d1395bb-ffb5-492e-b214-4434c210acf7\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.239998 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-db-sync-config-data\") pod \"4d1395bb-ffb5-492e-b214-4434c210acf7\" (UID: \"4d1395bb-ffb5-492e-b214-4434c210acf7\") " Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.241996 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d1395bb-ffb5-492e-b214-4434c210acf7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4d1395bb-ffb5-492e-b214-4434c210acf7" (UID: "4d1395bb-ffb5-492e-b214-4434c210acf7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.259633 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-scripts" (OuterVolumeSpecName: "scripts") pod "4d1395bb-ffb5-492e-b214-4434c210acf7" (UID: "4d1395bb-ffb5-492e-b214-4434c210acf7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.260552 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4d1395bb-ffb5-492e-b214-4434c210acf7" (UID: "4d1395bb-ffb5-492e-b214-4434c210acf7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.264550 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1395bb-ffb5-492e-b214-4434c210acf7-kube-api-access-d5hpc" (OuterVolumeSpecName: "kube-api-access-d5hpc") pod "4d1395bb-ffb5-492e-b214-4434c210acf7" (UID: "4d1395bb-ffb5-492e-b214-4434c210acf7"). InnerVolumeSpecName "kube-api-access-d5hpc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.349570 4741 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.352758 4741 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d1395bb-ffb5-492e-b214-4434c210acf7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.352952 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.353044 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5hpc\" (UniqueName: \"kubernetes.io/projected/4d1395bb-ffb5-492e-b214-4434c210acf7-kube-api-access-d5hpc\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.398058 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d1395bb-ffb5-492e-b214-4434c210acf7" (UID: "4d1395bb-ffb5-492e-b214-4434c210acf7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.458825 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.534994 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-config-data" (OuterVolumeSpecName: "config-data") pod "4d1395bb-ffb5-492e-b214-4434c210acf7" (UID: "4d1395bb-ffb5-492e-b214-4434c210acf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.566950 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1395bb-ffb5-492e-b214-4434c210acf7-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.690152 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wbg8p" event={"ID":"4d1395bb-ffb5-492e-b214-4434c210acf7","Type":"ContainerDied","Data":"d410268ff833a525cd6d70d31f012a74070dcca5901407ca05d61498769650e2"} Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.690206 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d410268ff833a525cd6d70d31f012a74070dcca5901407ca05d61498769650e2" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.690258 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wbg8p" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.696028 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cf7rh" event={"ID":"959240c9-4eb8-4236-9927-758997ebf0a0","Type":"ContainerDied","Data":"389e4a86930dc265119eae203935a1e897cc73f6bb54911480ef21b190aa1eb5"} Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.696079 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="389e4a86930dc265119eae203935a1e897cc73f6bb54911480ef21b190aa1eb5" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.842322 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:41:06 crc kubenswrapper[4741]: E0226 08:41:06.947990 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.984960 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gswb\" (UniqueName: \"kubernetes.io/projected/959240c9-4eb8-4236-9927-758997ebf0a0-kube-api-access-8gswb\") pod \"959240c9-4eb8-4236-9927-758997ebf0a0\" (UID: \"959240c9-4eb8-4236-9927-758997ebf0a0\") " Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.985275 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959240c9-4eb8-4236-9927-758997ebf0a0-utilities\") pod \"959240c9-4eb8-4236-9927-758997ebf0a0\" (UID: \"959240c9-4eb8-4236-9927-758997ebf0a0\") " Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.985709 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959240c9-4eb8-4236-9927-758997ebf0a0-catalog-content\") pod \"959240c9-4eb8-4236-9927-758997ebf0a0\" (UID: \"959240c9-4eb8-4236-9927-758997ebf0a0\") " Feb 26 08:41:06 crc kubenswrapper[4741]: I0226 08:41:06.990097 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/959240c9-4eb8-4236-9927-758997ebf0a0-utilities" (OuterVolumeSpecName: "utilities") pod "959240c9-4eb8-4236-9927-758997ebf0a0" (UID: "959240c9-4eb8-4236-9927-758997ebf0a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.015523 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/959240c9-4eb8-4236-9927-758997ebf0a0-kube-api-access-8gswb" (OuterVolumeSpecName: "kube-api-access-8gswb") pod "959240c9-4eb8-4236-9927-758997ebf0a0" (UID: "959240c9-4eb8-4236-9927-758997ebf0a0"). InnerVolumeSpecName "kube-api-access-8gswb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.050265 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d6b948c9c-pm7qf"] Feb 26 08:41:07 crc kubenswrapper[4741]: E0226 08:41:07.055333 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959240c9-4eb8-4236-9927-758997ebf0a0" containerName="extract-utilities" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.055804 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="959240c9-4eb8-4236-9927-758997ebf0a0" containerName="extract-utilities" Feb 26 08:41:07 crc kubenswrapper[4741]: E0226 08:41:07.055833 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959240c9-4eb8-4236-9927-758997ebf0a0" containerName="extract-content" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.055844 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="959240c9-4eb8-4236-9927-758997ebf0a0" containerName="extract-content" Feb 26 08:41:07 crc kubenswrapper[4741]: E0226 08:41:07.055872 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959240c9-4eb8-4236-9927-758997ebf0a0" containerName="registry-server" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.055881 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="959240c9-4eb8-4236-9927-758997ebf0a0" containerName="registry-server" Feb 26 08:41:07 crc kubenswrapper[4741]: E0226 08:41:07.055929 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1395bb-ffb5-492e-b214-4434c210acf7" containerName="cinder-db-sync" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.055937 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1395bb-ffb5-492e-b214-4434c210acf7" containerName="cinder-db-sync" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.056421 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1395bb-ffb5-492e-b214-4434c210acf7" 
containerName="cinder-db-sync" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.056459 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="959240c9-4eb8-4236-9927-758997ebf0a0" containerName="registry-server" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.061619 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.064373 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/959240c9-4eb8-4236-9927-758997ebf0a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "959240c9-4eb8-4236-9927-758997ebf0a0" (UID: "959240c9-4eb8-4236-9927-758997ebf0a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.069642 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.069961 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.077642 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d6b948c9c-pm7qf"] Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.090961 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gswb\" (UniqueName: \"kubernetes.io/projected/959240c9-4eb8-4236-9927-758997ebf0a0-kube-api-access-8gswb\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.090994 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959240c9-4eb8-4236-9927-758997ebf0a0-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.091005 4741 reconciler_common.go:293] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959240c9-4eb8-4236-9927-758997ebf0a0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.125979 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cgzpz"] Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.194119 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-public-tls-certs\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.194266 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zzht\" (UniqueName: \"kubernetes.io/projected/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-kube-api-access-9zzht\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.194339 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-ovndb-tls-certs\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.194593 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-internal-tls-certs\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 
08:41:07.194743 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-config\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.195003 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-httpd-config\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.195089 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-combined-ca-bundle\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.299093 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-ovndb-tls-certs\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.299644 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-internal-tls-certs\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.299698 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-config\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.299825 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-httpd-config\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.299852 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-combined-ca-bundle\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.299974 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-public-tls-certs\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.300141 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zzht\" (UniqueName: \"kubernetes.io/projected/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-kube-api-access-9zzht\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.304607 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-ovndb-tls-certs\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.313284 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-internal-tls-certs\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.316636 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-combined-ca-bundle\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.320263 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-public-tls-certs\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.338128 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-config\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.338307 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zzht\" (UniqueName: \"kubernetes.io/projected/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-kube-api-access-9zzht\") pod \"neutron-d6b948c9c-pm7qf\" (UID: 
\"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.348349 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ffb31ef3-acf3-4fc6-83a4-2a898da5dffd-httpd-config\") pod \"neutron-d6b948c9c-pm7qf\" (UID: \"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd\") " pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.408836 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.547509 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.551607 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.556570 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.557163 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.557328 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.557552 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pthp9" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.595572 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.742834 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-tv7bp"] Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 
08:41:07.832381 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.849080 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.849266 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-config-data\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.849410 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e353365b-5d76-48b5-b500-6aa0ced0a15d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.849638 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-scripts\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:07 crc kubenswrapper[4741]: I0226 08:41:07.849674 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfq2f\" (UniqueName: \"kubernetes.io/projected/e353365b-5d76-48b5-b500-6aa0ced0a15d-kube-api-access-sfq2f\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.061058 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-scripts\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.061138 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfq2f\" (UniqueName: \"kubernetes.io/projected/e353365b-5d76-48b5-b500-6aa0ced0a15d-kube-api-access-sfq2f\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.061463 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.061487 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.061555 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-config-data\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.061692 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e353365b-5d76-48b5-b500-6aa0ced0a15d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.061925 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e353365b-5d76-48b5-b500-6aa0ced0a15d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.087194 4741 generic.go:334] "Generic (PLEG): container finished" podID="c695186e-c672-45a4-933d-7f1546c18090" containerID="0a96478fa123cc762053b2d494a7eecf65aa5cfb4abf00670b6846db1d8d1ea7" exitCode=0 Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.134932 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" event={"ID":"c695186e-c672-45a4-933d-7f1546c18090","Type":"ContainerDied","Data":"0a96478fa123cc762053b2d494a7eecf65aa5cfb4abf00670b6846db1d8d1ea7"} Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.134980 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" event={"ID":"c695186e-c672-45a4-933d-7f1546c18090","Type":"ContainerStarted","Data":"39794452aa44cedc8b7ed350cfa574397ca2676394e7085fc3deedf7832aac5f"} Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.156995 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-scripts\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.181189 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c4c8dc778-mb68n"] Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.232705 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cf7rh" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.235238 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerName="ceilometer-notification-agent" containerID="cri-o://9941961b65e63fc70e1e3738177edfe9571f1ed1b7322139645f4a9845b664bd" gracePeriod=30 Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.235387 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaee063-eb59-4c8e-b482-de4efc08084a","Type":"ContainerStarted","Data":"1e841647653f52aeb9b96363827078ce4640c502697c9752039404552f89c88f"} Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.235441 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.235486 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerName="proxy-httpd" containerID="cri-o://1e841647653f52aeb9b96363827078ce4640c502697c9752039404552f89c88f" gracePeriod=30 Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.235546 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerName="sg-core" 
containerID="cri-o://0fd7044efa0b54602e120e1b9f5c1ca6c87b2f28a517526d58d38e35b74bcbc0" gracePeriod=30 Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.247014 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfq2f\" (UniqueName: \"kubernetes.io/projected/e353365b-5d76-48b5-b500-6aa0ced0a15d-kube-api-access-sfq2f\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.260259 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-config-data\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.260496 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.284955 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.307196 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-tv7bp"] Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.351184 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66bc8c7d7d-sd4nm"] Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.362039 4741 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.400632 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-n2qqk"] Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.403187 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.519247 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-n2qqk"] Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.528515 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.528594 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-dns-svc\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.528669 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5vgl\" (UniqueName: \"kubernetes.io/projected/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-kube-api-access-b5vgl\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.528736 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-config\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.528797 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.528905 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.568514 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-768d8b48ff-xfq8n"] Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.634489 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.638271 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-s8b4c"] Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.639679 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.639759 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.639827 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-dns-svc\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.639966 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5vgl\" (UniqueName: \"kubernetes.io/projected/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-kube-api-access-b5vgl\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.640224 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-config\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.640342 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: 
\"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.641085 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.642468 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-config\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.643352 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-dns-svc\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.643887 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.659847 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.662408 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.669551 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.683368 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5vgl\" (UniqueName: \"kubernetes.io/projected/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-kube-api-access-b5vgl\") pod \"dnsmasq-dns-5784cf869f-n2qqk\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.712182 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cf7rh"] Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.755929 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.779750 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cf7rh"] Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.801247 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.853193 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-864786857b-lmwmt"] Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.857788 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.858042 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-config-data\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.858229 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00d4f449-ee46-43e5-b427-4913f6e080e2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.858407 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d4f449-ee46-43e5-b427-4913f6e080e2-logs\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.860853 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-scripts\") pod \"cinder-api-0\" (UID: 
\"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.861317 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-config-data-custom\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.861346 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfw4h\" (UniqueName: \"kubernetes.io/projected/00d4f449-ee46-43e5-b427-4913f6e080e2-kube-api-access-sfw4h\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: W0226 08:41:08.873650 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86642216_602d_42f0_81f7_4834499a7539.slice/crio-f5b216f1e3786dc860a223903766f408aea5584a6f909da343bb7d339a6e6dfb WatchSource:0}: Error finding container f5b216f1e3786dc860a223903766f408aea5584a6f909da343bb7d339a6e6dfb: Status 404 returned error can't find the container with id f5b216f1e3786dc860a223903766f408aea5584a6f909da343bb7d339a6e6dfb Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.965862 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00d4f449-ee46-43e5-b427-4913f6e080e2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.965929 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d4f449-ee46-43e5-b427-4913f6e080e2-logs\") pod \"cinder-api-0\" 
(UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.965954 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-scripts\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.966068 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-config-data-custom\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.966093 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfw4h\" (UniqueName: \"kubernetes.io/projected/00d4f449-ee46-43e5-b427-4913f6e080e2-kube-api-access-sfw4h\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.966148 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.966188 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-config-data\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.972058 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00d4f449-ee46-43e5-b427-4913f6e080e2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.973699 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d4f449-ee46-43e5-b427-4913f6e080e2-logs\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.974184 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-config-data\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:08 crc kubenswrapper[4741]: I0226 08:41:08.976493 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-scripts\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:08.989004 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-config-data-custom\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.004334 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0" Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.019875 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfw4h\" (UniqueName: \"kubernetes.io/projected/00d4f449-ee46-43e5-b427-4913f6e080e2-kube-api-access-sfw4h\") pod \"cinder-api-0\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " pod="openstack/cinder-api-0"
Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.092017 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.237652 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d6b948c9c-pm7qf"]
Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.252957 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz"
Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.300893 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbbqr\" (UniqueName: \"kubernetes.io/projected/c695186e-c672-45a4-933d-7f1546c18090-kube-api-access-kbbqr\") pod \"c695186e-c672-45a4-933d-7f1546c18090\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") "
Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.301065 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-dns-svc\") pod \"c695186e-c672-45a4-933d-7f1546c18090\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") "
Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.301205 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-ovsdbserver-nb\") pod \"c695186e-c672-45a4-933d-7f1546c18090\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") "
Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.301500 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-config\") pod \"c695186e-c672-45a4-933d-7f1546c18090\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") "
Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.301575 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-dns-swift-storage-0\") pod \"c695186e-c672-45a4-933d-7f1546c18090\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") "
Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.301614 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-ovsdbserver-sb\") pod \"c695186e-c672-45a4-933d-7f1546c18090\" (UID: \"c695186e-c672-45a4-933d-7f1546c18090\") "
Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.311153 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864786857b-lmwmt" event={"ID":"86642216-602d-42f0-81f7-4834499a7539","Type":"ContainerStarted","Data":"f5b216f1e3786dc860a223903766f408aea5584a6f909da343bb7d339a6e6dfb"}
Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.336710 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c695186e-c672-45a4-933d-7f1546c18090-kube-api-access-kbbqr" (OuterVolumeSpecName: "kube-api-access-kbbqr") pod "c695186e-c672-45a4-933d-7f1546c18090" (UID: "c695186e-c672-45a4-933d-7f1546c18090"). InnerVolumeSpecName "kube-api-access-kbbqr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.358641 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" event={"ID":"2e50a17a-32c0-4f25-8335-ea9f21ee5382","Type":"ContainerStarted","Data":"b1de0d3381b23f8e31d5cd1d2ff55d186f3d74707c54681016630c4961a3ef0f"}
Feb 26 08:41:09 crc kubenswrapper[4741]: I0226 08:41:09.359608 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-config" (OuterVolumeSpecName: "config") pod "c695186e-c672-45a4-933d-7f1546c18090" (UID: "c695186e-c672-45a4-933d-7f1546c18090"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.375005 4741 generic.go:334] "Generic (PLEG): container finished" podID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerID="1e841647653f52aeb9b96363827078ce4640c502697c9752039404552f89c88f" exitCode=0
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.375052 4741 generic.go:334] "Generic (PLEG): container finished" podID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerID="0fd7044efa0b54602e120e1b9f5c1ca6c87b2f28a517526d58d38e35b74bcbc0" exitCode=2
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.375153 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaee063-eb59-4c8e-b482-de4efc08084a","Type":"ContainerDied","Data":"1e841647653f52aeb9b96363827078ce4640c502697c9752039404552f89c88f"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.375227 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaee063-eb59-4c8e-b482-de4efc08084a","Type":"ContainerDied","Data":"0fd7044efa0b54602e120e1b9f5c1ca6c87b2f28a517526d58d38e35b74bcbc0"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.375557 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c695186e-c672-45a4-933d-7f1546c18090" (UID: "c695186e-c672-45a4-933d-7f1546c18090"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.376746 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c695186e-c672-45a4-933d-7f1546c18090" (UID: "c695186e-c672-45a4-933d-7f1546c18090"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.378151 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-768d8b48ff-xfq8n" event={"ID":"4410803b-98d4-4e00-a854-4427dd5d3ebc","Type":"ContainerStarted","Data":"d88b7806244cbab38bdbe9409be50814edac520c1c58cbad785a4a3cbc2e5c71"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.383204 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n" event={"ID":"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b","Type":"ContainerStarted","Data":"5b31307c27ad96c536e2feb6885d5ae2a632e1b886bb20f0d3b6b79664ec427f"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.386510 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c695186e-c672-45a4-933d-7f1546c18090" (UID: "c695186e-c672-45a4-933d-7f1546c18090"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.397647 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c695186e-c672-45a4-933d-7f1546c18090" (UID: "c695186e-c672-45a4-933d-7f1546c18090"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.400872 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c" event={"ID":"3bc55121-fbed-47b1-93b8-ffe02186eceb","Type":"ContainerStarted","Data":"d02fa773b35b2b5f9f709962e8e28630fa4957766284028506473cae8ad25a4c"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.411397 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbbqr\" (UniqueName: \"kubernetes.io/projected/c695186e-c672-45a4-933d-7f1546c18090-kube-api-access-kbbqr\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.411434 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.411447 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.411461 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-config\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.411472 4741 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.411485 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c695186e-c672-45a4-933d-7f1546c18090-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.423988 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp" event={"ID":"cdd17750-029b-4a96-84ea-8e577dd288c2","Type":"ContainerStarted","Data":"53842381f23dd70e7a0a0f729b3e7cfb9ba469451280e288f5a1f4766bcf3168"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.440877 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz" event={"ID":"c695186e-c672-45a4-933d-7f1546c18090","Type":"ContainerDied","Data":"39794452aa44cedc8b7ed350cfa574397ca2676394e7085fc3deedf7832aac5f"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.440945 4741 scope.go:117] "RemoveContainer" containerID="0a96478fa123cc762053b2d494a7eecf65aa5cfb4abf00670b6846db1d8d1ea7"
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.441009 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-cgzpz"
Feb 26 08:41:10 crc kubenswrapper[4741]: W0226 08:41:09.465063 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffb31ef3_acf3_4fc6_83a4_2a898da5dffd.slice/crio-934ff5f420bb7b212eea8bf05c5b4da7f6aeef4200bced72c371f6e0f4d8b238 WatchSource:0}: Error finding container 934ff5f420bb7b212eea8bf05c5b4da7f6aeef4200bced72c371f6e0f4d8b238: Status 404 returned error can't find the container with id 934ff5f420bb7b212eea8bf05c5b4da7f6aeef4200bced72c371f6e0f4d8b238
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.467820 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 26 08:41:10 crc kubenswrapper[4741]: W0226 08:41:09.480745 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode353365b_5d76_48b5_b500_6aa0ced0a15d.slice/crio-4ffead3a0d9e126e69b6a27f761fd3b70609ce00ac96ac761d7713b49f14e2fd WatchSource:0}: Error finding container 4ffead3a0d9e126e69b6a27f761fd3b70609ce00ac96ac761d7713b49f14e2fd: Status 404 returned error can't find the container with id 4ffead3a0d9e126e69b6a27f761fd3b70609ce00ac96ac761d7713b49f14e2fd
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.657162 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cgzpz"]
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.679846 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cgzpz"]
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.823517 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="959240c9-4eb8-4236-9927-758997ebf0a0" path="/var/lib/kubelet/pods/959240c9-4eb8-4236-9927-758997ebf0a0/volumes"
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:09.824741 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c695186e-c672-45a4-933d-7f1546c18090" path="/var/lib/kubelet/pods/c695186e-c672-45a4-933d-7f1546c18090/volumes"
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.463630 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e353365b-5d76-48b5-b500-6aa0ced0a15d","Type":"ContainerStarted","Data":"4ffead3a0d9e126e69b6a27f761fd3b70609ce00ac96ac761d7713b49f14e2fd"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.473532 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864786857b-lmwmt" event={"ID":"86642216-602d-42f0-81f7-4834499a7539","Type":"ContainerStarted","Data":"05a28ce2fb11c6716fdfd3b5ec7204d0d1363f66b1c9de4094beaf75133c02ca"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.483443 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" event={"ID":"2e50a17a-32c0-4f25-8335-ea9f21ee5382","Type":"ContainerStarted","Data":"1919a6166e7b124281be3be1d58992df421bc3e65c5f4143d65d4df095621e88"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.483823 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" event={"ID":"2e50a17a-32c0-4f25-8335-ea9f21ee5382","Type":"ContainerStarted","Data":"08d099fc6323cd98e6675dc91277df48466891f579d69f205603c2fafebab095"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.485610 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66bc8c7d7d-sd4nm"
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.485767 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66bc8c7d7d-sd4nm"
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.490092 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d6b948c9c-pm7qf" event={"ID":"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd","Type":"ContainerStarted","Data":"934ff5f420bb7b212eea8bf05c5b4da7f6aeef4200bced72c371f6e0f4d8b238"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.495344 4741 generic.go:334] "Generic (PLEG): container finished" podID="3bc55121-fbed-47b1-93b8-ffe02186eceb" containerID="00fd4fe6ba0611e7e15cf3dda57a77f74885453279eefb2563b8d31c7ec954c0" exitCode=0
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.495603 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c" event={"ID":"3bc55121-fbed-47b1-93b8-ffe02186eceb","Type":"ContainerDied","Data":"00fd4fe6ba0611e7e15cf3dda57a77f74885453279eefb2563b8d31c7ec954c0"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.503561 4741 generic.go:334] "Generic (PLEG): container finished" podID="cdd17750-029b-4a96-84ea-8e577dd288c2" containerID="f4c64e55bc734f2d32bf69d64ee4ed0cc533b9faffc4491fc3d3baf64f135790" exitCode=0
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.503719 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp" event={"ID":"cdd17750-029b-4a96-84ea-8e577dd288c2","Type":"ContainerDied","Data":"f4c64e55bc734f2d32bf69d64ee4ed0cc533b9faffc4491fc3d3baf64f135790"}
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.532727 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" podStartSLOduration=6.532702968 podStartE2EDuration="6.532702968s" podCreationTimestamp="2026-02-26 08:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:41:10.504352732 +0000 UTC m=+1705.500290129" watchObservedRunningTime="2026-02-26 08:41:10.532702968 +0000 UTC m=+1705.528640355"
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.742925 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-n2qqk"]
Feb 26 08:41:10 crc kubenswrapper[4741]: I0226 08:41:10.788041 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333"
Feb 26 08:41:10 crc kubenswrapper[4741]: E0226 08:41:10.789488 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5"
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.134342 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.496027 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.503170 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.550471 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-ovsdbserver-sb\") pod \"3bc55121-fbed-47b1-93b8-ffe02186eceb\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") "
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.550540 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-dns-swift-storage-0\") pod \"3bc55121-fbed-47b1-93b8-ffe02186eceb\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") "
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.550593 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-ovsdbserver-sb\") pod \"cdd17750-029b-4a96-84ea-8e577dd288c2\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") "
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.550629 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-config\") pod \"3bc55121-fbed-47b1-93b8-ffe02186eceb\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") "
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.550769 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-swift-storage-0\") pod \"cdd17750-029b-4a96-84ea-8e577dd288c2\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") "
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.550936 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-dns-svc\") pod \"3bc55121-fbed-47b1-93b8-ffe02186eceb\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") "
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.550993 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-ovsdbserver-nb\") pod \"3bc55121-fbed-47b1-93b8-ffe02186eceb\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") "
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.551045 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-config\") pod \"cdd17750-029b-4a96-84ea-8e577dd288c2\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") "
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.551072 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw7p2\" (UniqueName: \"kubernetes.io/projected/cdd17750-029b-4a96-84ea-8e577dd288c2-kube-api-access-cw7p2\") pod \"cdd17750-029b-4a96-84ea-8e577dd288c2\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") "
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.551146 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-svc\") pod \"cdd17750-029b-4a96-84ea-8e577dd288c2\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") "
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.551221 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrr5q\" (UniqueName: \"kubernetes.io/projected/3bc55121-fbed-47b1-93b8-ffe02186eceb-kube-api-access-jrr5q\") pod \"3bc55121-fbed-47b1-93b8-ffe02186eceb\" (UID: \"3bc55121-fbed-47b1-93b8-ffe02186eceb\") "
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.551269 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-ovsdbserver-nb\") pod \"cdd17750-029b-4a96-84ea-8e577dd288c2\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") "
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.575618 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd17750-029b-4a96-84ea-8e577dd288c2-kube-api-access-cw7p2" (OuterVolumeSpecName: "kube-api-access-cw7p2") pod "cdd17750-029b-4a96-84ea-8e577dd288c2" (UID: "cdd17750-029b-4a96-84ea-8e577dd288c2"). InnerVolumeSpecName "kube-api-access-cw7p2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.580703 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp" event={"ID":"cdd17750-029b-4a96-84ea-8e577dd288c2","Type":"ContainerDied","Data":"53842381f23dd70e7a0a0f729b3e7cfb9ba469451280e288f5a1f4766bcf3168"}
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.580775 4741 scope.go:117] "RemoveContainer" containerID="f4c64e55bc734f2d32bf69d64ee4ed0cc533b9faffc4491fc3d3baf64f135790"
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.580953 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-tv7bp"
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.583158 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc55121-fbed-47b1-93b8-ffe02186eceb-kube-api-access-jrr5q" (OuterVolumeSpecName: "kube-api-access-jrr5q") pod "3bc55121-fbed-47b1-93b8-ffe02186eceb" (UID: "3bc55121-fbed-47b1-93b8-ffe02186eceb"). InnerVolumeSpecName "kube-api-access-jrr5q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.589242 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00d4f449-ee46-43e5-b427-4913f6e080e2","Type":"ContainerStarted","Data":"a092e8d40078253d26b7fe6b430bc0d7061f0722f9b3d66c2908ea86c98ba0b5"}
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.592972 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" event={"ID":"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2","Type":"ContainerStarted","Data":"78cb00dd105fe3696106b8dbdd5cbf3017016f1389ead9fd56b27ed9112d129a"}
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.604499 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c" event={"ID":"3bc55121-fbed-47b1-93b8-ffe02186eceb","Type":"ContainerDied","Data":"d02fa773b35b2b5f9f709962e8e28630fa4957766284028506473cae8ad25a4c"}
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.604554 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-s8b4c"
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.657882 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw7p2\" (UniqueName: \"kubernetes.io/projected/cdd17750-029b-4a96-84ea-8e577dd288c2-kube-api-access-cw7p2\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.657928 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrr5q\" (UniqueName: \"kubernetes.io/projected/3bc55121-fbed-47b1-93b8-ffe02186eceb-kube-api-access-jrr5q\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.740232 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-config" (OuterVolumeSpecName: "config") pod "cdd17750-029b-4a96-84ea-8e577dd288c2" (UID: "cdd17750-029b-4a96-84ea-8e577dd288c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.742562 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cdd17750-029b-4a96-84ea-8e577dd288c2" (UID: "cdd17750-029b-4a96-84ea-8e577dd288c2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.744023 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cdd17750-029b-4a96-84ea-8e577dd288c2" (UID: "cdd17750-029b-4a96-84ea-8e577dd288c2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.749175 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3bc55121-fbed-47b1-93b8-ffe02186eceb" (UID: "3bc55121-fbed-47b1-93b8-ffe02186eceb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.762881 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-config\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.762913 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.762923 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.762952 4741 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.764757 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-config" (OuterVolumeSpecName: "config") pod "3bc55121-fbed-47b1-93b8-ffe02186eceb" (UID: "3bc55121-fbed-47b1-93b8-ffe02186eceb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.766132 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3bc55121-fbed-47b1-93b8-ffe02186eceb" (UID: "3bc55121-fbed-47b1-93b8-ffe02186eceb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.769597 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3bc55121-fbed-47b1-93b8-ffe02186eceb" (UID: "3bc55121-fbed-47b1-93b8-ffe02186eceb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.781888 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3bc55121-fbed-47b1-93b8-ffe02186eceb" (UID: "3bc55121-fbed-47b1-93b8-ffe02186eceb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:11 crc kubenswrapper[4741]: E0226 08:41:11.784600 4741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-swift-storage-0 podName:cdd17750-029b-4a96-84ea-8e577dd288c2 nodeName:}" failed. No retries permitted until 2026-02-26 08:41:12.284555204 +0000 UTC m=+1707.280492591 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-swift-storage-0" (UniqueName: "kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-swift-storage-0") pod "cdd17750-029b-4a96-84ea-8e577dd288c2" (UID: "cdd17750-029b-4a96-84ea-8e577dd288c2") : error deleting /var/lib/kubelet/pods/cdd17750-029b-4a96-84ea-8e577dd288c2/volume-subpaths: remove /var/lib/kubelet/pods/cdd17750-029b-4a96-84ea-8e577dd288c2/volume-subpaths: no such file or directory
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.784825 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cdd17750-029b-4a96-84ea-8e577dd288c2" (UID: "cdd17750-029b-4a96-84ea-8e577dd288c2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.867493 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.867532 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.867543 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.867563 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.867574 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bc55121-fbed-47b1-93b8-ffe02186eceb-config\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.966837 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-s8b4c"]
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.986872 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-s8b4c"]
Feb 26 08:41:11 crc kubenswrapper[4741]: I0226 08:41:11.992334 4741 scope.go:117] "RemoveContainer" containerID="00fd4fe6ba0611e7e15cf3dda57a77f74885453279eefb2563b8d31c7ec954c0"
Feb 26 08:41:12 crc kubenswrapper[4741]: I0226 08:41:12.285533 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-swift-storage-0\") pod \"cdd17750-029b-4a96-84ea-8e577dd288c2\" (UID: \"cdd17750-029b-4a96-84ea-8e577dd288c2\") "
Feb 26 08:41:12 crc kubenswrapper[4741]: I0226 08:41:12.285994 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cdd17750-029b-4a96-84ea-8e577dd288c2" (UID: "cdd17750-029b-4a96-84ea-8e577dd288c2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:41:12 crc kubenswrapper[4741]: I0226 08:41:12.287057 4741 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd17750-029b-4a96-84ea-8e577dd288c2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:12 crc kubenswrapper[4741]: I0226 08:41:12.355809 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 26 08:41:12 crc kubenswrapper[4741]: I0226 08:41:12.613685 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-tv7bp"]
Feb 26 08:41:12 crc kubenswrapper[4741]: I0226 08:41:12.650799 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-tv7bp"]
Feb 26 08:41:12 crc kubenswrapper[4741]: I0226 08:41:12.685286 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d6b948c9c-pm7qf" event={"ID":"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd","Type":"ContainerStarted","Data":"80623cec6a243e513b10504a57915a63052ceb816a644c3c5a8c1943bda1e8dc"}
Feb 26 08:41:12 crc kubenswrapper[4741]: I0226 08:41:12.702140 4741 generic.go:334] "Generic (PLEG): container finished" podID="a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" containerID="6609cb7cf633f5daf733df4282c1d5f5966a329417f2fc3454374dec130e926a" exitCode=0
Feb 26 08:41:12 crc kubenswrapper[4741]: I0226 08:41:12.702263 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" event={"ID":"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2","Type":"ContainerDied","Data":"6609cb7cf633f5daf733df4282c1d5f5966a329417f2fc3454374dec130e926a"}
Feb 26 08:41:12 crc kubenswrapper[4741]: I0226 08:41:12.783589 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864786857b-lmwmt" event={"ID":"86642216-602d-42f0-81f7-4834499a7539","Type":"ContainerStarted","Data":"678a131ee86705e16d118c95dc52468378f54397f39100e022a13a61cea8fc22"}
Feb 26 08:41:12 crc kubenswrapper[4741]: I0226 08:41:12.785352 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-864786857b-lmwmt"
Feb 26 08:41:12 crc kubenswrapper[4741]: I0226 08:41:12.841544 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-864786857b-lmwmt" podStartSLOduration=8.838078639999999 podStartE2EDuration="8.83807864s" podCreationTimestamp="2026-02-26 08:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:41:12.815017574 +0000 UTC m=+1707.810954981" watchObservedRunningTime="2026-02-26 08:41:12.83807864 +0000 UTC m=+1707.834016027"
Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.292356 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ldrfn"
Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.376603 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ldrfn"
Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.551163 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ldrfn"]
Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.886390 4741 generic.go:334] "Generic (PLEG): container finished" podID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerID="9941961b65e63fc70e1e3738177edfe9571f1ed1b7322139645f4a9845b664bd" exitCode=0
Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.916356 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bc55121-fbed-47b1-93b8-ffe02186eceb" path="/var/lib/kubelet/pods/3bc55121-fbed-47b1-93b8-ffe02186eceb/volumes"
Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.917461 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd17750-029b-4a96-84ea-8e577dd288c2" path="/var/lib/kubelet/pods/cdd17750-029b-4a96-84ea-8e577dd288c2/volumes"
Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.918484 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n" event={"ID":"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b","Type":"ContainerStarted","Data":"d6d9a219d8f4f269e1b9a9b921ed1017161d229444aa19b7cedb5ebbecb6624c"}
Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.918565 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n" event={"ID":"8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b","Type":"ContainerStarted","Data":"03787b509475d9168c5e414ba9fee1e153d8c9245c5bb44f73ce09c698ecd195"}
Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.918586 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00d4f449-ee46-43e5-b427-4913f6e080e2","Type":"ContainerStarted","Data":"ae7a971b4ef8f0f060397d684b322f96ee1232256011827233a62529971028c3"}
Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.918605 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e353365b-5d76-48b5-b500-6aa0ced0a15d","Type":"ContainerStarted","Data":"0210ed8de6d209ae66fd60843acc5d726f8260b97158f998b333e90df45d937f"}
Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.918620 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaee063-eb59-4c8e-b482-de4efc08084a","Type":"ContainerDied","Data":"9941961b65e63fc70e1e3738177edfe9571f1ed1b7322139645f4a9845b664bd"}
Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.918642 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-768d8b48ff-xfq8n" event={"ID":"4410803b-98d4-4e00-a854-4427dd5d3ebc","Type":"ContainerStarted","Data":"046df49f3728c871d959cd506a1e3b8fd90075db075268c3d26bbb6e725254ed"}
Feb 26 08:41:13 crc kubenswrapper[4741]: I0226
08:41:13.941992 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d6b948c9c-pm7qf" event={"ID":"ffb31ef3-acf3-4fc6-83a4-2a898da5dffd","Type":"ContainerStarted","Data":"1df138b051665b5af466a52dc3bcffe9390476815e3a6b75cb2bebf59c1378df"} Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.942162 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.942003 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7c4c8dc778-mb68n" podStartSLOduration=6.506907666 podStartE2EDuration="9.941976228s" podCreationTimestamp="2026-02-26 08:41:04 +0000 UTC" firstStartedPulling="2026-02-26 08:41:07.97452563 +0000 UTC m=+1702.970463017" lastFinishedPulling="2026-02-26 08:41:11.409594192 +0000 UTC m=+1706.405531579" observedRunningTime="2026-02-26 08:41:13.878687968 +0000 UTC m=+1708.874625355" watchObservedRunningTime="2026-02-26 08:41:13.941976228 +0000 UTC m=+1708.937913615" Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.960020 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" event={"ID":"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2","Type":"ContainerStarted","Data":"0efab2efc81d97c047ef4217bb3aa5bcdc0dd56b46133d9909f14eb8b5e911a5"} Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.960653 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:13 crc kubenswrapper[4741]: I0226 08:41:13.991393 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:41:13 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:41:13 crc kubenswrapper[4741]: > Feb 26 08:41:14 
crc kubenswrapper[4741]: I0226 08:41:14.013544 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-768d8b48ff-xfq8n" podStartSLOduration=6.582478754 podStartE2EDuration="10.013515132s" podCreationTimestamp="2026-02-26 08:41:04 +0000 UTC" firstStartedPulling="2026-02-26 08:41:08.49784756 +0000 UTC m=+1703.493784947" lastFinishedPulling="2026-02-26 08:41:11.928883938 +0000 UTC m=+1706.924821325" observedRunningTime="2026-02-26 08:41:13.934195216 +0000 UTC m=+1708.930132603" watchObservedRunningTime="2026-02-26 08:41:14.013515132 +0000 UTC m=+1709.009452519" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.026876 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d6b948c9c-pm7qf" podStartSLOduration=8.026841781 podStartE2EDuration="8.026841781s" podCreationTimestamp="2026-02-26 08:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:41:13.994177212 +0000 UTC m=+1708.990114609" watchObservedRunningTime="2026-02-26 08:41:14.026841781 +0000 UTC m=+1709.022779188" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.030197 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.057815 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" podStartSLOduration=7.057787781 podStartE2EDuration="7.057787781s" podCreationTimestamp="2026-02-26 08:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:41:14.015871229 +0000 UTC m=+1709.011808616" watchObservedRunningTime="2026-02-26 08:41:14.057787781 +0000 UTC m=+1709.053725168" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.224763 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hv7t\" (UniqueName: \"kubernetes.io/projected/bbaee063-eb59-4c8e-b482-de4efc08084a-kube-api-access-7hv7t\") pod \"bbaee063-eb59-4c8e-b482-de4efc08084a\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.224841 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-sg-core-conf-yaml\") pod \"bbaee063-eb59-4c8e-b482-de4efc08084a\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.224864 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-combined-ca-bundle\") pod \"bbaee063-eb59-4c8e-b482-de4efc08084a\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.224893 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaee063-eb59-4c8e-b482-de4efc08084a-log-httpd\") pod 
\"bbaee063-eb59-4c8e-b482-de4efc08084a\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.224943 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaee063-eb59-4c8e-b482-de4efc08084a-run-httpd\") pod \"bbaee063-eb59-4c8e-b482-de4efc08084a\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.224996 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-scripts\") pod \"bbaee063-eb59-4c8e-b482-de4efc08084a\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.225012 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-config-data\") pod \"bbaee063-eb59-4c8e-b482-de4efc08084a\" (UID: \"bbaee063-eb59-4c8e-b482-de4efc08084a\") " Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.240572 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbaee063-eb59-4c8e-b482-de4efc08084a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bbaee063-eb59-4c8e-b482-de4efc08084a" (UID: "bbaee063-eb59-4c8e-b482-de4efc08084a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.241256 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbaee063-eb59-4c8e-b482-de4efc08084a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bbaee063-eb59-4c8e-b482-de4efc08084a" (UID: "bbaee063-eb59-4c8e-b482-de4efc08084a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.257858 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbaee063-eb59-4c8e-b482-de4efc08084a-kube-api-access-7hv7t" (OuterVolumeSpecName: "kube-api-access-7hv7t") pod "bbaee063-eb59-4c8e-b482-de4efc08084a" (UID: "bbaee063-eb59-4c8e-b482-de4efc08084a"). InnerVolumeSpecName "kube-api-access-7hv7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.314325 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-scripts" (OuterVolumeSpecName: "scripts") pod "bbaee063-eb59-4c8e-b482-de4efc08084a" (UID: "bbaee063-eb59-4c8e-b482-de4efc08084a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.334438 4741 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaee063-eb59-4c8e-b482-de4efc08084a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.334471 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.334482 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hv7t\" (UniqueName: \"kubernetes.io/projected/bbaee063-eb59-4c8e-b482-de4efc08084a-kube-api-access-7hv7t\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.334492 4741 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaee063-eb59-4c8e-b482-de4efc08084a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:14 
crc kubenswrapper[4741]: I0226 08:41:14.420291 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bbaee063-eb59-4c8e-b482-de4efc08084a" (UID: "bbaee063-eb59-4c8e-b482-de4efc08084a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.439240 4741 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.544277 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbaee063-eb59-4c8e-b482-de4efc08084a" (UID: "bbaee063-eb59-4c8e-b482-de4efc08084a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.577307 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-config-data" (OuterVolumeSpecName: "config-data") pod "bbaee063-eb59-4c8e-b482-de4efc08084a" (UID: "bbaee063-eb59-4c8e-b482-de4efc08084a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.647036 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.647067 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaee063-eb59-4c8e-b482-de4efc08084a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.973522 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00d4f449-ee46-43e5-b427-4913f6e080e2","Type":"ContainerStarted","Data":"30f37427f3dc6b3a43798d7945f5941527fc2e7ae3db75a529f93badcb114d15"} Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.973755 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="00d4f449-ee46-43e5-b427-4913f6e080e2" containerName="cinder-api-log" containerID="cri-o://ae7a971b4ef8f0f060397d684b322f96ee1232256011827233a62529971028c3" gracePeriod=30 Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.974740 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.974816 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="00d4f449-ee46-43e5-b427-4913f6e080e2" containerName="cinder-api" containerID="cri-o://30f37427f3dc6b3a43798d7945f5941527fc2e7ae3db75a529f93badcb114d15" gracePeriod=30 Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.982255 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bbaee063-eb59-4c8e-b482-de4efc08084a","Type":"ContainerDied","Data":"1d35289f01351770de694e3aab376618edae86aeb7a2a8314511811262ed52c2"} Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.982328 4741 scope.go:117] "RemoveContainer" containerID="1e841647653f52aeb9b96363827078ce4640c502697c9752039404552f89c88f" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.982567 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.992197 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ldrfn" podUID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" containerName="registry-server" containerID="cri-o://fac3d5c984cbfd50d3029e45db3bee7f108a7a3732127f2b15df525e51eb0720" gracePeriod=2 Feb 26 08:41:14 crc kubenswrapper[4741]: I0226 08:41:14.992250 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-768d8b48ff-xfq8n" event={"ID":"4410803b-98d4-4e00-a854-4427dd5d3ebc","Type":"ContainerStarted","Data":"f6736c234cdf51feabbc39d560b92577b5dc0b563afc9475baeebb0b4cb2e6f0"} Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.026200 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.026174376 podStartE2EDuration="7.026174376s" podCreationTimestamp="2026-02-26 08:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:41:15.00275327 +0000 UTC m=+1709.998690657" watchObservedRunningTime="2026-02-26 08:41:15.026174376 +0000 UTC m=+1710.022111763" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.062062 4741 scope.go:117] "RemoveContainer" containerID="0fd7044efa0b54602e120e1b9f5c1ca6c87b2f28a517526d58d38e35b74bcbc0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.118314 4741 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.129679 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.145827 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:41:15 crc kubenswrapper[4741]: E0226 08:41:15.146594 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd17750-029b-4a96-84ea-8e577dd288c2" containerName="init" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.146611 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd17750-029b-4a96-84ea-8e577dd288c2" containerName="init" Feb 26 08:41:15 crc kubenswrapper[4741]: E0226 08:41:15.146650 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerName="proxy-httpd" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.146656 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerName="proxy-httpd" Feb 26 08:41:15 crc kubenswrapper[4741]: E0226 08:41:15.146671 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc55121-fbed-47b1-93b8-ffe02186eceb" containerName="init" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.146677 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc55121-fbed-47b1-93b8-ffe02186eceb" containerName="init" Feb 26 08:41:15 crc kubenswrapper[4741]: E0226 08:41:15.146707 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerName="ceilometer-notification-agent" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.146713 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerName="ceilometer-notification-agent" Feb 26 08:41:15 crc kubenswrapper[4741]: E0226 08:41:15.146725 
4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerName="sg-core" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.146737 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerName="sg-core" Feb 26 08:41:15 crc kubenswrapper[4741]: E0226 08:41:15.146761 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c695186e-c672-45a4-933d-7f1546c18090" containerName="init" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.146770 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="c695186e-c672-45a4-933d-7f1546c18090" containerName="init" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.147023 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerName="proxy-httpd" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.147038 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc55121-fbed-47b1-93b8-ffe02186eceb" containerName="init" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.147055 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd17750-029b-4a96-84ea-8e577dd288c2" containerName="init" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.147064 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="c695186e-c672-45a4-933d-7f1546c18090" containerName="init" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.147081 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerName="sg-core" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.147091 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" containerName="ceilometer-notification-agent" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.158385 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.162285 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.162569 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.169539 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.175537 4741 scope.go:117] "RemoveContainer" containerID="9941961b65e63fc70e1e3738177edfe9571f1ed1b7322139645f4a9845b664bd" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.264934 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.265031 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8925-eed6-4b8b-981d-07cc64436623-log-httpd\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.265170 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-scripts\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.265318 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-config-data\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.265384 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwqcc\" (UniqueName: \"kubernetes.io/projected/d9cf8925-eed6-4b8b-981d-07cc64436623-kube-api-access-rwqcc\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.265417 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.265443 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8925-eed6-4b8b-981d-07cc64436623-run-httpd\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.368431 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-scripts\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.368615 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-config-data\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " 
pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.368668 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwqcc\" (UniqueName: \"kubernetes.io/projected/d9cf8925-eed6-4b8b-981d-07cc64436623-kube-api-access-rwqcc\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.368700 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.368731 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8925-eed6-4b8b-981d-07cc64436623-run-httpd\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.368792 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.368858 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8925-eed6-4b8b-981d-07cc64436623-log-httpd\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.369529 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d9cf8925-eed6-4b8b-981d-07cc64436623-log-httpd\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.369525 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8925-eed6-4b8b-981d-07cc64436623-run-httpd\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.379197 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.380446 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-scripts\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.381680 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.384335 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-config-data\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.391277 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rwqcc\" (UniqueName: \"kubernetes.io/projected/d9cf8925-eed6-4b8b-981d-07cc64436623-kube-api-access-rwqcc\") pod \"ceilometer-0\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.585004 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.820779 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b8557bd7d-cnk2f"] Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.823542 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.853148 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.853701 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.856391 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbaee063-eb59-4c8e-b482-de4efc08084a" path="/var/lib/kubelet/pods/bbaee063-eb59-4c8e-b482-de4efc08084a/volumes" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.858765 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b8557bd7d-cnk2f"] Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.889751 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e8d427c-00a8-4c0f-acee-63d42390501d-logs\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.889877 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-internal-tls-certs\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.889948 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-config-data-custom\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.889987 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-combined-ca-bundle\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.890059 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-public-tls-certs\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.890118 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc4fq\" (UniqueName: \"kubernetes.io/projected/5e8d427c-00a8-4c0f-acee-63d42390501d-kube-api-access-qc4fq\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " 
pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.890153 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-config-data\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:15 crc kubenswrapper[4741]: I0226 08:41:15.949905 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.001870 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e8d427c-00a8-4c0f-acee-63d42390501d-logs\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.002099 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-internal-tls-certs\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.004492 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-config-data-custom\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.004705 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-combined-ca-bundle\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.004788 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e8d427c-00a8-4c0f-acee-63d42390501d-logs\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.004961 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-public-tls-certs\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.005085 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc4fq\" (UniqueName: \"kubernetes.io/projected/5e8d427c-00a8-4c0f-acee-63d42390501d-kube-api-access-qc4fq\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.005211 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-config-data\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.009394 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-config-data-custom\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.035413 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-internal-tls-certs\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.045733 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-public-tls-certs\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.046789 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-config-data\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.052855 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e8d427c-00a8-4c0f-acee-63d42390501d-combined-ca-bundle\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.067555 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"e353365b-5d76-48b5-b500-6aa0ced0a15d","Type":"ContainerStarted","Data":"63ea572ec81267afd64ed24fef89a54206c7000a7bdbb8c747923e5f5f2d206f"} Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.069737 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc4fq\" (UniqueName: \"kubernetes.io/projected/5e8d427c-00a8-4c0f-acee-63d42390501d-kube-api-access-qc4fq\") pod \"barbican-api-5b8557bd7d-cnk2f\" (UID: \"5e8d427c-00a8-4c0f-acee-63d42390501d\") " pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.079517 4741 generic.go:334] "Generic (PLEG): container finished" podID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" containerID="fac3d5c984cbfd50d3029e45db3bee7f108a7a3732127f2b15df525e51eb0720" exitCode=0 Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.079943 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ldrfn" event={"ID":"f37f72b3-ec4f-4fb8-b730-d7850bbbb964","Type":"ContainerDied","Data":"fac3d5c984cbfd50d3029e45db3bee7f108a7a3732127f2b15df525e51eb0720"} Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.080082 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ldrfn" event={"ID":"f37f72b3-ec4f-4fb8-b730-d7850bbbb964","Type":"ContainerDied","Data":"c3aed75e7c0cec3b28b785f791bbe4b6026364c2779d727d053826e362f3028d"} Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.080248 4741 scope.go:117] "RemoveContainer" containerID="fac3d5c984cbfd50d3029e45db3bee7f108a7a3732127f2b15df525e51eb0720" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.080255 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ldrfn" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.084181 4741 generic.go:334] "Generic (PLEG): container finished" podID="00d4f449-ee46-43e5-b427-4913f6e080e2" containerID="ae7a971b4ef8f0f060397d684b322f96ee1232256011827233a62529971028c3" exitCode=143 Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.084323 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00d4f449-ee46-43e5-b427-4913f6e080e2","Type":"ContainerDied","Data":"ae7a971b4ef8f0f060397d684b322f96ee1232256011827233a62529971028c3"} Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.106643 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.351147347 podStartE2EDuration="9.106617327s" podCreationTimestamp="2026-02-26 08:41:07 +0000 UTC" firstStartedPulling="2026-02-26 08:41:09.528453163 +0000 UTC m=+1704.524390550" lastFinishedPulling="2026-02-26 08:41:12.283923143 +0000 UTC m=+1707.279860530" observedRunningTime="2026-02-26 08:41:16.098936028 +0000 UTC m=+1711.094873425" watchObservedRunningTime="2026-02-26 08:41:16.106617327 +0000 UTC m=+1711.102554714" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.130803 4741 scope.go:117] "RemoveContainer" containerID="2afee7cd2f0f477bb8968a79871b74438168525e59e00c39c6c6f454ec8d0b72" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.137681 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-utilities\") pod \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\" (UID: \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\") " Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.138248 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njxn4\" (UniqueName: 
\"kubernetes.io/projected/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-kube-api-access-njxn4\") pod \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\" (UID: \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\") " Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.138381 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-catalog-content\") pod \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\" (UID: \"f37f72b3-ec4f-4fb8-b730-d7850bbbb964\") " Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.140082 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-utilities" (OuterVolumeSpecName: "utilities") pod "f37f72b3-ec4f-4fb8-b730-d7850bbbb964" (UID: "f37f72b3-ec4f-4fb8-b730-d7850bbbb964"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.148417 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-kube-api-access-njxn4" (OuterVolumeSpecName: "kube-api-access-njxn4") pod "f37f72b3-ec4f-4fb8-b730-d7850bbbb964" (UID: "f37f72b3-ec4f-4fb8-b730-d7850bbbb964"). InnerVolumeSpecName "kube-api-access-njxn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.194659 4741 scope.go:117] "RemoveContainer" containerID="f8529f5892d6ee5c9c46614548ddc4d76ecb447bb038fec2434aec6a92bd2568" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.195986 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f37f72b3-ec4f-4fb8-b730-d7850bbbb964" (UID: "f37f72b3-ec4f-4fb8-b730-d7850bbbb964"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.211412 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.240885 4741 scope.go:117] "RemoveContainer" containerID="fac3d5c984cbfd50d3029e45db3bee7f108a7a3732127f2b15df525e51eb0720" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.243370 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.243489 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njxn4\" (UniqueName: \"kubernetes.io/projected/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-kube-api-access-njxn4\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.243543 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37f72b3-ec4f-4fb8-b730-d7850bbbb964-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:16 crc kubenswrapper[4741]: E0226 08:41:16.245033 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac3d5c984cbfd50d3029e45db3bee7f108a7a3732127f2b15df525e51eb0720\": container with ID starting with fac3d5c984cbfd50d3029e45db3bee7f108a7a3732127f2b15df525e51eb0720 not found: ID does not exist" containerID="fac3d5c984cbfd50d3029e45db3bee7f108a7a3732127f2b15df525e51eb0720" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.245075 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac3d5c984cbfd50d3029e45db3bee7f108a7a3732127f2b15df525e51eb0720"} err="failed to get container status 
\"fac3d5c984cbfd50d3029e45db3bee7f108a7a3732127f2b15df525e51eb0720\": rpc error: code = NotFound desc = could not find container \"fac3d5c984cbfd50d3029e45db3bee7f108a7a3732127f2b15df525e51eb0720\": container with ID starting with fac3d5c984cbfd50d3029e45db3bee7f108a7a3732127f2b15df525e51eb0720 not found: ID does not exist" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.245103 4741 scope.go:117] "RemoveContainer" containerID="2afee7cd2f0f477bb8968a79871b74438168525e59e00c39c6c6f454ec8d0b72" Feb 26 08:41:16 crc kubenswrapper[4741]: E0226 08:41:16.252961 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2afee7cd2f0f477bb8968a79871b74438168525e59e00c39c6c6f454ec8d0b72\": container with ID starting with 2afee7cd2f0f477bb8968a79871b74438168525e59e00c39c6c6f454ec8d0b72 not found: ID does not exist" containerID="2afee7cd2f0f477bb8968a79871b74438168525e59e00c39c6c6f454ec8d0b72" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.253017 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2afee7cd2f0f477bb8968a79871b74438168525e59e00c39c6c6f454ec8d0b72"} err="failed to get container status \"2afee7cd2f0f477bb8968a79871b74438168525e59e00c39c6c6f454ec8d0b72\": rpc error: code = NotFound desc = could not find container \"2afee7cd2f0f477bb8968a79871b74438168525e59e00c39c6c6f454ec8d0b72\": container with ID starting with 2afee7cd2f0f477bb8968a79871b74438168525e59e00c39c6c6f454ec8d0b72 not found: ID does not exist" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.253049 4741 scope.go:117] "RemoveContainer" containerID="f8529f5892d6ee5c9c46614548ddc4d76ecb447bb038fec2434aec6a92bd2568" Feb 26 08:41:16 crc kubenswrapper[4741]: E0226 08:41:16.256786 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f8529f5892d6ee5c9c46614548ddc4d76ecb447bb038fec2434aec6a92bd2568\": container with ID starting with f8529f5892d6ee5c9c46614548ddc4d76ecb447bb038fec2434aec6a92bd2568 not found: ID does not exist" containerID="f8529f5892d6ee5c9c46614548ddc4d76ecb447bb038fec2434aec6a92bd2568" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.256998 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8529f5892d6ee5c9c46614548ddc4d76ecb447bb038fec2434aec6a92bd2568"} err="failed to get container status \"f8529f5892d6ee5c9c46614548ddc4d76ecb447bb038fec2434aec6a92bd2568\": rpc error: code = NotFound desc = could not find container \"f8529f5892d6ee5c9c46614548ddc4d76ecb447bb038fec2434aec6a92bd2568\": container with ID starting with f8529f5892d6ee5c9c46614548ddc4d76ecb447bb038fec2434aec6a92bd2568 not found: ID does not exist" Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.532693 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ldrfn"] Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.589335 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ldrfn"] Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.624923 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:41:16 crc kubenswrapper[4741]: W0226 08:41:16.946787 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e8d427c_00a8_4c0f_acee_63d42390501d.slice/crio-83ff00de45761bf1960524beefcd58792f7689bf1617a27a79ec7f317f87ee23 WatchSource:0}: Error finding container 83ff00de45761bf1960524beefcd58792f7689bf1617a27a79ec7f317f87ee23: Status 404 returned error can't find the container with id 83ff00de45761bf1960524beefcd58792f7689bf1617a27a79ec7f317f87ee23 Feb 26 08:41:16 crc kubenswrapper[4741]: I0226 08:41:16.946905 4741 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-api-5b8557bd7d-cnk2f"] Feb 26 08:41:17 crc kubenswrapper[4741]: I0226 08:41:17.132514 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8557bd7d-cnk2f" event={"ID":"5e8d427c-00a8-4c0f-acee-63d42390501d","Type":"ContainerStarted","Data":"83ff00de45761bf1960524beefcd58792f7689bf1617a27a79ec7f317f87ee23"} Feb 26 08:41:17 crc kubenswrapper[4741]: I0226 08:41:17.146401 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9cf8925-eed6-4b8b-981d-07cc64436623","Type":"ContainerStarted","Data":"97301faae73e4e48c0a8b1c166f1629fc7ebb9a1655b0f52f0c10cb48fa9b2b1"} Feb 26 08:41:17 crc kubenswrapper[4741]: I0226 08:41:17.801269 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" path="/var/lib/kubelet/pods/f37f72b3-ec4f-4fb8-b730-d7850bbbb964/volumes" Feb 26 08:41:18 crc kubenswrapper[4741]: I0226 08:41:18.143201 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:18 crc kubenswrapper[4741]: I0226 08:41:18.157995 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9cf8925-eed6-4b8b-981d-07cc64436623","Type":"ContainerStarted","Data":"9d4dea437c1b85180e47e2056b6a42189ebf5b0d2224c5fb7af7f15316493193"} Feb 26 08:41:18 crc kubenswrapper[4741]: I0226 08:41:18.159788 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8557bd7d-cnk2f" event={"ID":"5e8d427c-00a8-4c0f-acee-63d42390501d","Type":"ContainerStarted","Data":"99229e64a360aa99364cedf0049a5f5f44edfb373e5ebaad7470b629712069f7"} Feb 26 08:41:18 crc kubenswrapper[4741]: I0226 08:41:18.159822 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8557bd7d-cnk2f" 
event={"ID":"5e8d427c-00a8-4c0f-acee-63d42390501d","Type":"ContainerStarted","Data":"bd1ded24e86b9788692d6efd914587229a47565bca9de20476126a173915e961"} Feb 26 08:41:18 crc kubenswrapper[4741]: I0226 08:41:18.161154 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:18 crc kubenswrapper[4741]: I0226 08:41:18.161180 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:18 crc kubenswrapper[4741]: I0226 08:41:18.206550 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b8557bd7d-cnk2f" podStartSLOduration=3.206521555 podStartE2EDuration="3.206521555s" podCreationTimestamp="2026-02-26 08:41:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:41:18.193269518 +0000 UTC m=+1713.189206905" watchObservedRunningTime="2026-02-26 08:41:18.206521555 +0000 UTC m=+1713.202458942" Feb 26 08:41:18 crc kubenswrapper[4741]: I0226 08:41:18.363067 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 08:41:18 crc kubenswrapper[4741]: I0226 08:41:18.432341 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:18 crc kubenswrapper[4741]: I0226 08:41:18.733329 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="e353365b-5d76-48b5-b500-6aa0ced0a15d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 08:41:18 crc kubenswrapper[4741]: I0226 08:41:18.808302 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:41:18 crc kubenswrapper[4741]: I0226 08:41:18.900119 4741 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-ddszv"] Feb 26 08:41:18 crc kubenswrapper[4741]: I0226 08:41:18.900410 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84976bdf-ddszv" podUID="bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" containerName="dnsmasq-dns" containerID="cri-o://53b5b159428f0c7df8c7063e26d98a629a0fb8abe18e390cf84bb18dbb3c04bf" gracePeriod=10 Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.199156 4741 generic.go:334] "Generic (PLEG): container finished" podID="bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" containerID="53b5b159428f0c7df8c7063e26d98a629a0fb8abe18e390cf84bb18dbb3c04bf" exitCode=0 Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.199276 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-ddszv" event={"ID":"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697","Type":"ContainerDied","Data":"53b5b159428f0c7df8c7063e26d98a629a0fb8abe18e390cf84bb18dbb3c04bf"} Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.205014 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9cf8925-eed6-4b8b-981d-07cc64436623","Type":"ContainerStarted","Data":"fec08c0201e830fb17fe5aeff4824cb6bdca36b363be7978b573aa2e76b523a2"} Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.666507 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.815517 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-dns-svc\") pod \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.815765 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mldqn\" (UniqueName: \"kubernetes.io/projected/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-kube-api-access-mldqn\") pod \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.816464 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-config\") pod \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.816518 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-nb\") pod \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.816557 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-sb\") pod \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.829421 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-kube-api-access-mldqn" (OuterVolumeSpecName: "kube-api-access-mldqn") pod "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" (UID: "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697"). InnerVolumeSpecName "kube-api-access-mldqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.921763 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-config" (OuterVolumeSpecName: "config") pod "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" (UID: "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.921851 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" (UID: "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.922311 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-sb\") pod \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.922850 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-config\") pod \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\" (UID: \"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697\") " Feb 26 08:41:19 crc kubenswrapper[4741]: W0226 08:41:19.923345 4741 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697/volumes/kubernetes.io~configmap/ovsdbserver-sb Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.923362 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" (UID: "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:41:19 crc kubenswrapper[4741]: W0226 08:41:19.923454 4741 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697/volumes/kubernetes.io~configmap/config Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.923478 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-config" (OuterVolumeSpecName: "config") pod "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" (UID: "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.923710 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.923727 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.923739 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mldqn\" (UniqueName: \"kubernetes.io/projected/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-kube-api-access-mldqn\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.940011 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" (UID: "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:41:19 crc kubenswrapper[4741]: I0226 08:41:19.945690 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" (UID: "bcc07bb8-7a1a-4e92-a36f-57cdb8e10697"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:41:20 crc kubenswrapper[4741]: I0226 08:41:20.026715 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:20 crc kubenswrapper[4741]: I0226 08:41:20.026753 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:20 crc kubenswrapper[4741]: I0226 08:41:20.220713 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-ddszv" event={"ID":"bcc07bb8-7a1a-4e92-a36f-57cdb8e10697","Type":"ContainerDied","Data":"8b451f9f3d255e7522224a923d988eb3b0bfbe3b9df910d89288c2a763363975"} Feb 26 08:41:20 crc kubenswrapper[4741]: I0226 08:41:20.220783 4741 scope.go:117] "RemoveContainer" containerID="53b5b159428f0c7df8c7063e26d98a629a0fb8abe18e390cf84bb18dbb3c04bf" Feb 26 08:41:20 crc kubenswrapper[4741]: I0226 08:41:20.220966 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-ddszv" Feb 26 08:41:20 crc kubenswrapper[4741]: I0226 08:41:20.230479 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9cf8925-eed6-4b8b-981d-07cc64436623","Type":"ContainerStarted","Data":"0918d0e1535416598af3f4974fa83a1fcbe6e7153046c310a86f30cddd3e4dc6"} Feb 26 08:41:20 crc kubenswrapper[4741]: I0226 08:41:20.261226 4741 scope.go:117] "RemoveContainer" containerID="bd1a1dae9d89920e94d643d2b52c1aa6549da38ee45cd5b8fbf9c0a7eabb476b" Feb 26 08:41:20 crc kubenswrapper[4741]: I0226 08:41:20.284151 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-ddszv"] Feb 26 08:41:20 crc kubenswrapper[4741]: I0226 08:41:20.303255 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-ddszv"] Feb 26 08:41:21 crc kubenswrapper[4741]: I0226 08:41:21.247693 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9cf8925-eed6-4b8b-981d-07cc64436623","Type":"ContainerStarted","Data":"2d07c50936eadaad6b722298b5d5c96043c84b128f815a9105b9b489e9ad1536"} Feb 26 08:41:21 crc kubenswrapper[4741]: I0226 08:41:21.248487 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 08:41:21 crc kubenswrapper[4741]: I0226 08:41:21.272758 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.062739143 podStartE2EDuration="6.272736169s" podCreationTimestamp="2026-02-26 08:41:15 +0000 UTC" firstStartedPulling="2026-02-26 08:41:16.627072265 +0000 UTC m=+1711.623009652" lastFinishedPulling="2026-02-26 08:41:20.837069291 +0000 UTC m=+1715.833006678" observedRunningTime="2026-02-26 08:41:21.270326601 +0000 UTC m=+1716.266263988" watchObservedRunningTime="2026-02-26 08:41:21.272736169 +0000 UTC m=+1716.268673556" Feb 26 08:41:21 crc kubenswrapper[4741]: I0226 
08:41:21.802645 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:41:21 crc kubenswrapper[4741]: E0226 08:41:21.804022 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:41:21 crc kubenswrapper[4741]: I0226 08:41:21.841145 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" path="/var/lib/kubelet/pods/bcc07bb8-7a1a-4e92-a36f-57cdb8e10697/volumes" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.096196 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d788959bb-k7x27" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.387926 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d788959bb-k7x27" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.774343 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7566dbcd8b-6qsnk"] Feb 26 08:41:22 crc kubenswrapper[4741]: E0226 08:41:22.775337 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" containerName="extract-content" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.775357 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" containerName="extract-content" Feb 26 08:41:22 crc kubenswrapper[4741]: E0226 08:41:22.775375 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" containerName="extract-utilities" Feb 
26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.775382 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" containerName="extract-utilities" Feb 26 08:41:22 crc kubenswrapper[4741]: E0226 08:41:22.775392 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" containerName="dnsmasq-dns" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.775399 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" containerName="dnsmasq-dns" Feb 26 08:41:22 crc kubenswrapper[4741]: E0226 08:41:22.775425 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" containerName="registry-server" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.775431 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" containerName="registry-server" Feb 26 08:41:22 crc kubenswrapper[4741]: E0226 08:41:22.775468 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" containerName="init" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.775474 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" containerName="init" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.775688 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37f72b3-ec4f-4fb8-b730-d7850bbbb964" containerName="registry-server" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.775723 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc07bb8-7a1a-4e92-a36f-57cdb8e10697" containerName="dnsmasq-dns" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.777170 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.817366 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7566dbcd8b-6qsnk"] Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.839120 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-internal-tls-certs\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.839182 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-config-data\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.839223 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dkqj\" (UniqueName: \"kubernetes.io/projected/f5c8055f-a0fc-411f-9379-7079ee6d51b4-kube-api-access-9dkqj\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.839257 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-combined-ca-bundle\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.839285 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-public-tls-certs\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.839317 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-scripts\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.839466 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c8055f-a0fc-411f-9379-7079ee6d51b4-logs\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.942864 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dkqj\" (UniqueName: \"kubernetes.io/projected/f5c8055f-a0fc-411f-9379-7079ee6d51b4-kube-api-access-9dkqj\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.942961 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-combined-ca-bundle\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.942990 4741 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-public-tls-certs\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.943019 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-scripts\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.943140 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c8055f-a0fc-411f-9379-7079ee6d51b4-logs\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.943219 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-internal-tls-certs\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.943237 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-config-data\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.947380 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c8055f-a0fc-411f-9379-7079ee6d51b4-logs\") pod 
\"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.956631 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-config-data\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.957475 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-scripts\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:22 crc kubenswrapper[4741]: I0226 08:41:22.963808 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-internal-tls-certs\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:23 crc kubenswrapper[4741]: I0226 08:41:23.020644 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-combined-ca-bundle\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:23 crc kubenswrapper[4741]: I0226 08:41:23.035738 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c8055f-a0fc-411f-9379-7079ee6d51b4-public-tls-certs\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 
26 08:41:23 crc kubenswrapper[4741]: I0226 08:41:23.054418 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dkqj\" (UniqueName: \"kubernetes.io/projected/f5c8055f-a0fc-411f-9379-7079ee6d51b4-kube-api-access-9dkqj\") pod \"placement-7566dbcd8b-6qsnk\" (UID: \"f5c8055f-a0fc-411f-9379-7079ee6d51b4\") " pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:23 crc kubenswrapper[4741]: I0226 08:41:23.106975 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:23 crc kubenswrapper[4741]: I0226 08:41:23.398588 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 26 08:41:23 crc kubenswrapper[4741]: I0226 08:41:23.483216 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 08:41:23 crc kubenswrapper[4741]: W0226 08:41:23.885915 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5c8055f_a0fc_411f_9379_7079ee6d51b4.slice/crio-f7d5bf24e17c11f31183154a6fbfc162ae026698adc3e3f25097f48ccfa1132b WatchSource:0}: Error finding container f7d5bf24e17c11f31183154a6fbfc162ae026698adc3e3f25097f48ccfa1132b: Status 404 returned error can't find the container with id f7d5bf24e17c11f31183154a6fbfc162ae026698adc3e3f25097f48ccfa1132b Feb 26 08:41:23 crc kubenswrapper[4741]: I0226 08:41:23.889896 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7566dbcd8b-6qsnk"] Feb 26 08:41:23 crc kubenswrapper[4741]: I0226 08:41:23.976035 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:41:23 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:41:23 crc 
kubenswrapper[4741]: > Feb 26 08:41:24 crc kubenswrapper[4741]: I0226 08:41:24.143406 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="00d4f449-ee46-43e5-b427-4913f6e080e2" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.216:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 08:41:24 crc kubenswrapper[4741]: I0226 08:41:24.332564 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e353365b-5d76-48b5-b500-6aa0ced0a15d" containerName="cinder-scheduler" containerID="cri-o://0210ed8de6d209ae66fd60843acc5d726f8260b97158f998b333e90df45d937f" gracePeriod=30 Feb 26 08:41:24 crc kubenswrapper[4741]: I0226 08:41:24.332955 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7566dbcd8b-6qsnk" event={"ID":"f5c8055f-a0fc-411f-9379-7079ee6d51b4","Type":"ContainerStarted","Data":"3b46c80cdf3224393126458bf174f21e66bdd59cabcf8802e7d1c714dc7510cd"} Feb 26 08:41:24 crc kubenswrapper[4741]: I0226 08:41:24.332994 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7566dbcd8b-6qsnk" event={"ID":"f5c8055f-a0fc-411f-9379-7079ee6d51b4","Type":"ContainerStarted","Data":"f7d5bf24e17c11f31183154a6fbfc162ae026698adc3e3f25097f48ccfa1132b"} Feb 26 08:41:24 crc kubenswrapper[4741]: I0226 08:41:24.333457 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e353365b-5d76-48b5-b500-6aa0ced0a15d" containerName="probe" containerID="cri-o://63ea572ec81267afd64ed24fef89a54206c7000a7bdbb8c747923e5f5f2d206f" gracePeriod=30 Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.348584 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7566dbcd8b-6qsnk" 
event={"ID":"f5c8055f-a0fc-411f-9379-7079ee6d51b4","Type":"ContainerStarted","Data":"09b00701ca2f3758a0706b9ee32ecebc51849028df63d7d9cc3c9c5ce10498ce"} Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.349786 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.349802 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7566dbcd8b-6qsnk" Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.352198 4741 generic.go:334] "Generic (PLEG): container finished" podID="e353365b-5d76-48b5-b500-6aa0ced0a15d" containerID="63ea572ec81267afd64ed24fef89a54206c7000a7bdbb8c747923e5f5f2d206f" exitCode=0 Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.352245 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e353365b-5d76-48b5-b500-6aa0ced0a15d","Type":"ContainerDied","Data":"63ea572ec81267afd64ed24fef89a54206c7000a7bdbb8c747923e5f5f2d206f"} Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.391617 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7566dbcd8b-6qsnk" podStartSLOduration=3.391584404 podStartE2EDuration="3.391584404s" podCreationTimestamp="2026-02-26 08:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:41:25.374648333 +0000 UTC m=+1720.370585730" watchObservedRunningTime="2026-02-26 08:41:25.391584404 +0000 UTC m=+1720.387521781" Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.907257 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.957079 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-combined-ca-bundle\") pod \"e353365b-5d76-48b5-b500-6aa0ced0a15d\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.957492 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-scripts\") pod \"e353365b-5d76-48b5-b500-6aa0ced0a15d\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.957690 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e353365b-5d76-48b5-b500-6aa0ced0a15d-etc-machine-id\") pod \"e353365b-5d76-48b5-b500-6aa0ced0a15d\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.957719 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfq2f\" (UniqueName: \"kubernetes.io/projected/e353365b-5d76-48b5-b500-6aa0ced0a15d-kube-api-access-sfq2f\") pod \"e353365b-5d76-48b5-b500-6aa0ced0a15d\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.957854 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-config-data-custom\") pod \"e353365b-5d76-48b5-b500-6aa0ced0a15d\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.957898 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-config-data\") pod \"e353365b-5d76-48b5-b500-6aa0ced0a15d\" (UID: \"e353365b-5d76-48b5-b500-6aa0ced0a15d\") " Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.958186 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e353365b-5d76-48b5-b500-6aa0ced0a15d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e353365b-5d76-48b5-b500-6aa0ced0a15d" (UID: "e353365b-5d76-48b5-b500-6aa0ced0a15d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.960751 4741 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e353365b-5d76-48b5-b500-6aa0ced0a15d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.963830 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-scripts" (OuterVolumeSpecName: "scripts") pod "e353365b-5d76-48b5-b500-6aa0ced0a15d" (UID: "e353365b-5d76-48b5-b500-6aa0ced0a15d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.989483 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e353365b-5d76-48b5-b500-6aa0ced0a15d-kube-api-access-sfq2f" (OuterVolumeSpecName: "kube-api-access-sfq2f") pod "e353365b-5d76-48b5-b500-6aa0ced0a15d" (UID: "e353365b-5d76-48b5-b500-6aa0ced0a15d"). InnerVolumeSpecName "kube-api-access-sfq2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:25 crc kubenswrapper[4741]: I0226 08:41:25.989507 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e353365b-5d76-48b5-b500-6aa0ced0a15d" (UID: "e353365b-5d76-48b5-b500-6aa0ced0a15d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.063965 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfq2f\" (UniqueName: \"kubernetes.io/projected/e353365b-5d76-48b5-b500-6aa0ced0a15d-kube-api-access-sfq2f\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.064000 4741 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.064011 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.112257 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e353365b-5d76-48b5-b500-6aa0ced0a15d" (UID: "e353365b-5d76-48b5-b500-6aa0ced0a15d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.148465 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-config-data" (OuterVolumeSpecName: "config-data") pod "e353365b-5d76-48b5-b500-6aa0ced0a15d" (UID: "e353365b-5d76-48b5-b500-6aa0ced0a15d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.166616 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.167068 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e353365b-5d76-48b5-b500-6aa0ced0a15d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.375742 4741 generic.go:334] "Generic (PLEG): container finished" podID="e353365b-5d76-48b5-b500-6aa0ced0a15d" containerID="0210ed8de6d209ae66fd60843acc5d726f8260b97158f998b333e90df45d937f" exitCode=0 Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.375851 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.375876 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e353365b-5d76-48b5-b500-6aa0ced0a15d","Type":"ContainerDied","Data":"0210ed8de6d209ae66fd60843acc5d726f8260b97158f998b333e90df45d937f"} Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.375976 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e353365b-5d76-48b5-b500-6aa0ced0a15d","Type":"ContainerDied","Data":"4ffead3a0d9e126e69b6a27f761fd3b70609ce00ac96ac761d7713b49f14e2fd"} Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.376008 4741 scope.go:117] "RemoveContainer" containerID="63ea572ec81267afd64ed24fef89a54206c7000a7bdbb8c747923e5f5f2d206f" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.389865 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7cb84cbfff-vmnwr" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.482784 4741 scope.go:117] "RemoveContainer" containerID="0210ed8de6d209ae66fd60843acc5d726f8260b97158f998b333e90df45d937f" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.510618 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.534213 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.547970 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 08:41:26 crc kubenswrapper[4741]: E0226 08:41:26.572949 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e353365b-5d76-48b5-b500-6aa0ced0a15d" containerName="cinder-scheduler" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.572990 4741 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e353365b-5d76-48b5-b500-6aa0ced0a15d" containerName="cinder-scheduler" Feb 26 08:41:26 crc kubenswrapper[4741]: E0226 08:41:26.573024 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e353365b-5d76-48b5-b500-6aa0ced0a15d" containerName="probe" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.573032 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e353365b-5d76-48b5-b500-6aa0ced0a15d" containerName="probe" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.573416 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="e353365b-5d76-48b5-b500-6aa0ced0a15d" containerName="cinder-scheduler" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.573429 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="e353365b-5d76-48b5-b500-6aa0ced0a15d" containerName="probe" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.574800 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.574916 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.582220 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.627302 4741 scope.go:117] "RemoveContainer" containerID="63ea572ec81267afd64ed24fef89a54206c7000a7bdbb8c747923e5f5f2d206f" Feb 26 08:41:26 crc kubenswrapper[4741]: E0226 08:41:26.627890 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63ea572ec81267afd64ed24fef89a54206c7000a7bdbb8c747923e5f5f2d206f\": container with ID starting with 63ea572ec81267afd64ed24fef89a54206c7000a7bdbb8c747923e5f5f2d206f not found: ID does not exist" containerID="63ea572ec81267afd64ed24fef89a54206c7000a7bdbb8c747923e5f5f2d206f" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.627921 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63ea572ec81267afd64ed24fef89a54206c7000a7bdbb8c747923e5f5f2d206f"} err="failed to get container status \"63ea572ec81267afd64ed24fef89a54206c7000a7bdbb8c747923e5f5f2d206f\": rpc error: code = NotFound desc = could not find container \"63ea572ec81267afd64ed24fef89a54206c7000a7bdbb8c747923e5f5f2d206f\": container with ID starting with 63ea572ec81267afd64ed24fef89a54206c7000a7bdbb8c747923e5f5f2d206f not found: ID does not exist" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.627947 4741 scope.go:117] "RemoveContainer" containerID="0210ed8de6d209ae66fd60843acc5d726f8260b97158f998b333e90df45d937f" Feb 26 08:41:26 crc kubenswrapper[4741]: E0226 08:41:26.628172 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0210ed8de6d209ae66fd60843acc5d726f8260b97158f998b333e90df45d937f\": container with ID starting with 
0210ed8de6d209ae66fd60843acc5d726f8260b97158f998b333e90df45d937f not found: ID does not exist" containerID="0210ed8de6d209ae66fd60843acc5d726f8260b97158f998b333e90df45d937f" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.628189 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0210ed8de6d209ae66fd60843acc5d726f8260b97158f998b333e90df45d937f"} err="failed to get container status \"0210ed8de6d209ae66fd60843acc5d726f8260b97158f998b333e90df45d937f\": rpc error: code = NotFound desc = could not find container \"0210ed8de6d209ae66fd60843acc5d726f8260b97158f998b333e90df45d937f\": container with ID starting with 0210ed8de6d209ae66fd60843acc5d726f8260b97158f998b333e90df45d937f not found: ID does not exist" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.690095 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/035e58f7-7a11-4584-baee-a4036a07b94b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.691691 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjw2z\" (UniqueName: \"kubernetes.io/projected/035e58f7-7a11-4584-baee-a4036a07b94b-kube-api-access-fjw2z\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.691795 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035e58f7-7a11-4584-baee-a4036a07b94b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 
08:41:26.691983 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/035e58f7-7a11-4584-baee-a4036a07b94b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.692043 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/035e58f7-7a11-4584-baee-a4036a07b94b-scripts\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.693013 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/035e58f7-7a11-4584-baee-a4036a07b94b-config-data\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.795722 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/035e58f7-7a11-4584-baee-a4036a07b94b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.796086 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/035e58f7-7a11-4584-baee-a4036a07b94b-scripts\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.796231 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/035e58f7-7a11-4584-baee-a4036a07b94b-config-data\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.796448 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/035e58f7-7a11-4584-baee-a4036a07b94b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.796596 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjw2z\" (UniqueName: \"kubernetes.io/projected/035e58f7-7a11-4584-baee-a4036a07b94b-kube-api-access-fjw2z\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.796679 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035e58f7-7a11-4584-baee-a4036a07b94b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.801861 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035e58f7-7a11-4584-baee-a4036a07b94b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.803777 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/035e58f7-7a11-4584-baee-a4036a07b94b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.809944 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/035e58f7-7a11-4584-baee-a4036a07b94b-config-data\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.812773 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/035e58f7-7a11-4584-baee-a4036a07b94b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.813910 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/035e58f7-7a11-4584-baee-a4036a07b94b-scripts\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.833453 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjw2z\" (UniqueName: \"kubernetes.io/projected/035e58f7-7a11-4584-baee-a4036a07b94b-kube-api-access-fjw2z\") pod \"cinder-scheduler-0\" (UID: \"035e58f7-7a11-4584-baee-a4036a07b94b\") " pod="openstack/cinder-scheduler-0" Feb 26 08:41:26 crc kubenswrapper[4741]: I0226 08:41:26.979663 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 08:41:27 crc kubenswrapper[4741]: I0226 08:41:27.201825 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 26 08:41:27 crc kubenswrapper[4741]: I0226 08:41:27.741334 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 08:41:27 crc kubenswrapper[4741]: I0226 08:41:27.836875 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e353365b-5d76-48b5-b500-6aa0ced0a15d" path="/var/lib/kubelet/pods/e353365b-5d76-48b5-b500-6aa0ced0a15d/volumes" Feb 26 08:41:28 crc kubenswrapper[4741]: I0226 08:41:28.442246 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"035e58f7-7a11-4584-baee-a4036a07b94b","Type":"ContainerStarted","Data":"df0ba83fa2fb62c3afb728a9dd679b6d7e4b96c08f4bc991850c215384ae203f"} Feb 26 08:41:28 crc kubenswrapper[4741]: I0226 08:41:28.908225 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:29 crc kubenswrapper[4741]: I0226 08:41:29.284569 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b8557bd7d-cnk2f" Feb 26 08:41:29 crc kubenswrapper[4741]: I0226 08:41:29.357939 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-66bc8c7d7d-sd4nm"] Feb 26 08:41:29 crc kubenswrapper[4741]: I0226 08:41:29.358260 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" podUID="2e50a17a-32c0-4f25-8335-ea9f21ee5382" containerName="barbican-api-log" containerID="cri-o://08d099fc6323cd98e6675dc91277df48466891f579d69f205603c2fafebab095" gracePeriod=30 Feb 26 08:41:29 crc kubenswrapper[4741]: I0226 08:41:29.358441 4741 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-66bc8c7d7d-sd4nm" podUID="2e50a17a-32c0-4f25-8335-ea9f21ee5382" containerName="barbican-api" containerID="cri-o://1919a6166e7b124281be3be1d58992df421bc3e65c5f4143d65d4df095621e88" gracePeriod=30 Feb 26 08:41:29 crc kubenswrapper[4741]: I0226 08:41:29.482832 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"035e58f7-7a11-4584-baee-a4036a07b94b","Type":"ContainerStarted","Data":"6a21eaac4b5f8b95b400ae8f679cefad0273cc46abcb2f8c3b313cc90dd5c8f1"} Feb 26 08:41:30 crc kubenswrapper[4741]: I0226 08:41:30.496846 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"035e58f7-7a11-4584-baee-a4036a07b94b","Type":"ContainerStarted","Data":"a4a0430f8e248948246220bbca49f90084c701b312c347ed8037a27c8262378e"} Feb 26 08:41:30 crc kubenswrapper[4741]: I0226 08:41:30.499541 4741 generic.go:334] "Generic (PLEG): container finished" podID="2e50a17a-32c0-4f25-8335-ea9f21ee5382" containerID="08d099fc6323cd98e6675dc91277df48466891f579d69f205603c2fafebab095" exitCode=143 Feb 26 08:41:30 crc kubenswrapper[4741]: I0226 08:41:30.499606 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" event={"ID":"2e50a17a-32c0-4f25-8335-ea9f21ee5382","Type":"ContainerDied","Data":"08d099fc6323cd98e6675dc91277df48466891f579d69f205603c2fafebab095"} Feb 26 08:41:30 crc kubenswrapper[4741]: I0226 08:41:30.536578 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.536550396 podStartE2EDuration="4.536550396s" podCreationTimestamp="2026-02-26 08:41:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:41:30.524818673 +0000 UTC m=+1725.520756060" watchObservedRunningTime="2026-02-26 08:41:30.536550396 +0000 UTC m=+1725.532487783" Feb 26 08:41:31 crc 
kubenswrapper[4741]: I0226 08:41:31.385580 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.390320 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.393460 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.394590 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.398260 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mjqrl" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.411424 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.458598 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9884c9db-d963-4349-8cbb-a4a72a81d8cc-openstack-config\") pod \"openstackclient\" (UID: \"9884c9db-d963-4349-8cbb-a4a72a81d8cc\") " pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.458912 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9884c9db-d963-4349-8cbb-a4a72a81d8cc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9884c9db-d963-4349-8cbb-a4a72a81d8cc\") " pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.459325 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc25t\" (UniqueName: 
\"kubernetes.io/projected/9884c9db-d963-4349-8cbb-a4a72a81d8cc-kube-api-access-gc25t\") pod \"openstackclient\" (UID: \"9884c9db-d963-4349-8cbb-a4a72a81d8cc\") " pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.459469 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9884c9db-d963-4349-8cbb-a4a72a81d8cc-openstack-config-secret\") pod \"openstackclient\" (UID: \"9884c9db-d963-4349-8cbb-a4a72a81d8cc\") " pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.562809 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc25t\" (UniqueName: \"kubernetes.io/projected/9884c9db-d963-4349-8cbb-a4a72a81d8cc-kube-api-access-gc25t\") pod \"openstackclient\" (UID: \"9884c9db-d963-4349-8cbb-a4a72a81d8cc\") " pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.562890 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9884c9db-d963-4349-8cbb-a4a72a81d8cc-openstack-config-secret\") pod \"openstackclient\" (UID: \"9884c9db-d963-4349-8cbb-a4a72a81d8cc\") " pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.564095 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9884c9db-d963-4349-8cbb-a4a72a81d8cc-openstack-config\") pod \"openstackclient\" (UID: \"9884c9db-d963-4349-8cbb-a4a72a81d8cc\") " pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.564312 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9884c9db-d963-4349-8cbb-a4a72a81d8cc-combined-ca-bundle\") pod \"openstackclient\" 
(UID: \"9884c9db-d963-4349-8cbb-a4a72a81d8cc\") " pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.565378 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9884c9db-d963-4349-8cbb-a4a72a81d8cc-openstack-config\") pod \"openstackclient\" (UID: \"9884c9db-d963-4349-8cbb-a4a72a81d8cc\") " pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.571457 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9884c9db-d963-4349-8cbb-a4a72a81d8cc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9884c9db-d963-4349-8cbb-a4a72a81d8cc\") " pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.571832 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9884c9db-d963-4349-8cbb-a4a72a81d8cc-openstack-config-secret\") pod \"openstackclient\" (UID: \"9884c9db-d963-4349-8cbb-a4a72a81d8cc\") " pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.580793 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc25t\" (UniqueName: \"kubernetes.io/projected/9884c9db-d963-4349-8cbb-a4a72a81d8cc-kube-api-access-gc25t\") pod \"openstackclient\" (UID: \"9884c9db-d963-4349-8cbb-a4a72a81d8cc\") " pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.747486 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 26 08:41:31 crc kubenswrapper[4741]: I0226 08:41:31.985402 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 08:41:32 crc kubenswrapper[4741]: I0226 08:41:32.397905 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 26 08:41:32 crc kubenswrapper[4741]: I0226 08:41:32.529027 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9884c9db-d963-4349-8cbb-a4a72a81d8cc","Type":"ContainerStarted","Data":"f17bd296822558b5e670d82964e3d345c75eb004b8977df228db6835e7d76cbf"} Feb 26 08:41:32 crc kubenswrapper[4741]: I0226 08:41:32.550335 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" podUID="2e50a17a-32c0-4f25-8335-ea9f21ee5382" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.211:9311/healthcheck\": read tcp 10.217.0.2:48810->10.217.0.211:9311: read: connection reset by peer" Feb 26 08:41:32 crc kubenswrapper[4741]: I0226 08:41:32.550353 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" podUID="2e50a17a-32c0-4f25-8335-ea9f21ee5382" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.211:9311/healthcheck\": read tcp 10.217.0.2:48814->10.217.0.211:9311: read: connection reset by peer" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.337076 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.449377 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e50a17a-32c0-4f25-8335-ea9f21ee5382-logs\") pod \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.450038 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsfb7\" (UniqueName: \"kubernetes.io/projected/2e50a17a-32c0-4f25-8335-ea9f21ee5382-kube-api-access-gsfb7\") pod \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.450066 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-combined-ca-bundle\") pod \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.450089 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e50a17a-32c0-4f25-8335-ea9f21ee5382-logs" (OuterVolumeSpecName: "logs") pod "2e50a17a-32c0-4f25-8335-ea9f21ee5382" (UID: "2e50a17a-32c0-4f25-8335-ea9f21ee5382"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.450201 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-config-data\") pod \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.450233 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-config-data-custom\") pod \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\" (UID: \"2e50a17a-32c0-4f25-8335-ea9f21ee5382\") " Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.451264 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e50a17a-32c0-4f25-8335-ea9f21ee5382-logs\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.458433 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e50a17a-32c0-4f25-8335-ea9f21ee5382-kube-api-access-gsfb7" (OuterVolumeSpecName: "kube-api-access-gsfb7") pod "2e50a17a-32c0-4f25-8335-ea9f21ee5382" (UID: "2e50a17a-32c0-4f25-8335-ea9f21ee5382"). InnerVolumeSpecName "kube-api-access-gsfb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.458811 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2e50a17a-32c0-4f25-8335-ea9f21ee5382" (UID: "2e50a17a-32c0-4f25-8335-ea9f21ee5382"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.504994 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e50a17a-32c0-4f25-8335-ea9f21ee5382" (UID: "2e50a17a-32c0-4f25-8335-ea9f21ee5382"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.551402 4741 generic.go:334] "Generic (PLEG): container finished" podID="2e50a17a-32c0-4f25-8335-ea9f21ee5382" containerID="1919a6166e7b124281be3be1d58992df421bc3e65c5f4143d65d4df095621e88" exitCode=0 Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.551483 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" event={"ID":"2e50a17a-32c0-4f25-8335-ea9f21ee5382","Type":"ContainerDied","Data":"1919a6166e7b124281be3be1d58992df421bc3e65c5f4143d65d4df095621e88"} Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.551529 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" event={"ID":"2e50a17a-32c0-4f25-8335-ea9f21ee5382","Type":"ContainerDied","Data":"b1de0d3381b23f8e31d5cd1d2ff55d186f3d74707c54681016630c4961a3ef0f"} Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.551558 4741 scope.go:117] "RemoveContainer" containerID="1919a6166e7b124281be3be1d58992df421bc3e65c5f4143d65d4df095621e88" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.551910 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66bc8c7d7d-sd4nm" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.554393 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-config-data" (OuterVolumeSpecName: "config-data") pod "2e50a17a-32c0-4f25-8335-ea9f21ee5382" (UID: "2e50a17a-32c0-4f25-8335-ea9f21ee5382"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.555305 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsfb7\" (UniqueName: \"kubernetes.io/projected/2e50a17a-32c0-4f25-8335-ea9f21ee5382-kube-api-access-gsfb7\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.555337 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.555350 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.555366 4741 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e50a17a-32c0-4f25-8335-ea9f21ee5382-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.607029 4741 scope.go:117] "RemoveContainer" containerID="08d099fc6323cd98e6675dc91277df48466891f579d69f205603c2fafebab095" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.710507 4741 scope.go:117] "RemoveContainer" containerID="1919a6166e7b124281be3be1d58992df421bc3e65c5f4143d65d4df095621e88" Feb 26 08:41:33 crc kubenswrapper[4741]: E0226 
08:41:33.711770 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1919a6166e7b124281be3be1d58992df421bc3e65c5f4143d65d4df095621e88\": container with ID starting with 1919a6166e7b124281be3be1d58992df421bc3e65c5f4143d65d4df095621e88 not found: ID does not exist" containerID="1919a6166e7b124281be3be1d58992df421bc3e65c5f4143d65d4df095621e88" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.711833 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1919a6166e7b124281be3be1d58992df421bc3e65c5f4143d65d4df095621e88"} err="failed to get container status \"1919a6166e7b124281be3be1d58992df421bc3e65c5f4143d65d4df095621e88\": rpc error: code = NotFound desc = could not find container \"1919a6166e7b124281be3be1d58992df421bc3e65c5f4143d65d4df095621e88\": container with ID starting with 1919a6166e7b124281be3be1d58992df421bc3e65c5f4143d65d4df095621e88 not found: ID does not exist" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.711871 4741 scope.go:117] "RemoveContainer" containerID="08d099fc6323cd98e6675dc91277df48466891f579d69f205603c2fafebab095" Feb 26 08:41:33 crc kubenswrapper[4741]: E0226 08:41:33.712506 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08d099fc6323cd98e6675dc91277df48466891f579d69f205603c2fafebab095\": container with ID starting with 08d099fc6323cd98e6675dc91277df48466891f579d69f205603c2fafebab095 not found: ID does not exist" containerID="08d099fc6323cd98e6675dc91277df48466891f579d69f205603c2fafebab095" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.712583 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d099fc6323cd98e6675dc91277df48466891f579d69f205603c2fafebab095"} err="failed to get container status \"08d099fc6323cd98e6675dc91277df48466891f579d69f205603c2fafebab095\": rpc 
error: code = NotFound desc = could not find container \"08d099fc6323cd98e6675dc91277df48466891f579d69f205603c2fafebab095\": container with ID starting with 08d099fc6323cd98e6675dc91277df48466891f579d69f205603c2fafebab095 not found: ID does not exist" Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.895490 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-66bc8c7d7d-sd4nm"] Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.907429 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-66bc8c7d7d-sd4nm"] Feb 26 08:41:33 crc kubenswrapper[4741]: I0226 08:41:33.980070 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:41:33 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:41:33 crc kubenswrapper[4741]: > Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.481528 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5684558457-bgfq2"] Feb 26 08:41:34 crc kubenswrapper[4741]: E0226 08:41:34.482619 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e50a17a-32c0-4f25-8335-ea9f21ee5382" containerName="barbican-api-log" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.482635 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e50a17a-32c0-4f25-8335-ea9f21ee5382" containerName="barbican-api-log" Feb 26 08:41:34 crc kubenswrapper[4741]: E0226 08:41:34.482676 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e50a17a-32c0-4f25-8335-ea9f21ee5382" containerName="barbican-api" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.482685 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e50a17a-32c0-4f25-8335-ea9f21ee5382" containerName="barbican-api" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 
08:41:34.484662 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e50a17a-32c0-4f25-8335-ea9f21ee5382" containerName="barbican-api-log" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.484696 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e50a17a-32c0-4f25-8335-ea9f21ee5382" containerName="barbican-api" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.486060 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.491288 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.496434 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.496625 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.499401 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36ad4e3b-dd1d-40e9-9051-203753e6be0b-run-httpd\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.499496 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ad4e3b-dd1d-40e9-9051-203753e6be0b-combined-ca-bundle\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.499559 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-wxsvx\" (UniqueName: \"kubernetes.io/projected/36ad4e3b-dd1d-40e9-9051-203753e6be0b-kube-api-access-wxsvx\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.499660 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36ad4e3b-dd1d-40e9-9051-203753e6be0b-log-httpd\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.499824 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36ad4e3b-dd1d-40e9-9051-203753e6be0b-public-tls-certs\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.500002 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36ad4e3b-dd1d-40e9-9051-203753e6be0b-internal-tls-certs\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.500051 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/36ad4e3b-dd1d-40e9-9051-203753e6be0b-etc-swift\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.500141 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ad4e3b-dd1d-40e9-9051-203753e6be0b-config-data\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.532352 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.532866 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="ceilometer-central-agent" containerID="cri-o://9d4dea437c1b85180e47e2056b6a42189ebf5b0d2224c5fb7af7f15316493193" gracePeriod=30 Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.533526 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="proxy-httpd" containerID="cri-o://2d07c50936eadaad6b722298b5d5c96043c84b128f815a9105b9b489e9ad1536" gracePeriod=30 Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.533611 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="sg-core" containerID="cri-o://0918d0e1535416598af3f4974fa83a1fcbe6e7153046c310a86f30cddd3e4dc6" gracePeriod=30 Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.533675 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="ceilometer-notification-agent" containerID="cri-o://fec08c0201e830fb17fe5aeff4824cb6bdca36b363be7978b573aa2e76b523a2" gracePeriod=30 Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.562576 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ceilometer-0" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.568237 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5684558457-bgfq2"] Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.602622 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ad4e3b-dd1d-40e9-9051-203753e6be0b-combined-ca-bundle\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.602670 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxsvx\" (UniqueName: \"kubernetes.io/projected/36ad4e3b-dd1d-40e9-9051-203753e6be0b-kube-api-access-wxsvx\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.602713 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36ad4e3b-dd1d-40e9-9051-203753e6be0b-log-httpd\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.602761 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36ad4e3b-dd1d-40e9-9051-203753e6be0b-public-tls-certs\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.602824 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/36ad4e3b-dd1d-40e9-9051-203753e6be0b-internal-tls-certs\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.602849 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/36ad4e3b-dd1d-40e9-9051-203753e6be0b-etc-swift\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.602885 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ad4e3b-dd1d-40e9-9051-203753e6be0b-config-data\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.603044 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36ad4e3b-dd1d-40e9-9051-203753e6be0b-run-httpd\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.603680 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36ad4e3b-dd1d-40e9-9051-203753e6be0b-run-httpd\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.606517 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36ad4e3b-dd1d-40e9-9051-203753e6be0b-log-httpd\") pod \"swift-proxy-5684558457-bgfq2\" 
(UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.608722 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ad4e3b-dd1d-40e9-9051-203753e6be0b-combined-ca-bundle\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.608803 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/36ad4e3b-dd1d-40e9-9051-203753e6be0b-etc-swift\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.612771 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36ad4e3b-dd1d-40e9-9051-203753e6be0b-public-tls-certs\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.614324 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36ad4e3b-dd1d-40e9-9051-203753e6be0b-internal-tls-certs\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.622538 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ad4e3b-dd1d-40e9-9051-203753e6be0b-config-data\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 
08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.634954 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxsvx\" (UniqueName: \"kubernetes.io/projected/36ad4e3b-dd1d-40e9-9051-203753e6be0b-kube-api-access-wxsvx\") pod \"swift-proxy-5684558457-bgfq2\" (UID: \"36ad4e3b-dd1d-40e9-9051-203753e6be0b\") " pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:34 crc kubenswrapper[4741]: I0226 08:41:34.831312 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:35 crc kubenswrapper[4741]: I0226 08:41:35.423042 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:35 crc kubenswrapper[4741]: I0226 08:41:35.620647 4741 generic.go:334] "Generic (PLEG): container finished" podID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerID="2d07c50936eadaad6b722298b5d5c96043c84b128f815a9105b9b489e9ad1536" exitCode=0 Feb 26 08:41:35 crc kubenswrapper[4741]: I0226 08:41:35.620686 4741 generic.go:334] "Generic (PLEG): container finished" podID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerID="0918d0e1535416598af3f4974fa83a1fcbe6e7153046c310a86f30cddd3e4dc6" exitCode=2 Feb 26 08:41:35 crc kubenswrapper[4741]: I0226 08:41:35.620696 4741 generic.go:334] "Generic (PLEG): container finished" podID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerID="fec08c0201e830fb17fe5aeff4824cb6bdca36b363be7978b573aa2e76b523a2" exitCode=0 Feb 26 08:41:35 crc kubenswrapper[4741]: I0226 08:41:35.620704 4741 generic.go:334] "Generic (PLEG): container finished" podID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerID="9d4dea437c1b85180e47e2056b6a42189ebf5b0d2224c5fb7af7f15316493193" exitCode=0 Feb 26 08:41:35 crc kubenswrapper[4741]: I0226 08:41:35.620727 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d9cf8925-eed6-4b8b-981d-07cc64436623","Type":"ContainerDied","Data":"2d07c50936eadaad6b722298b5d5c96043c84b128f815a9105b9b489e9ad1536"} Feb 26 08:41:35 crc kubenswrapper[4741]: I0226 08:41:35.620760 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9cf8925-eed6-4b8b-981d-07cc64436623","Type":"ContainerDied","Data":"0918d0e1535416598af3f4974fa83a1fcbe6e7153046c310a86f30cddd3e4dc6"} Feb 26 08:41:35 crc kubenswrapper[4741]: I0226 08:41:35.620769 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9cf8925-eed6-4b8b-981d-07cc64436623","Type":"ContainerDied","Data":"fec08c0201e830fb17fe5aeff4824cb6bdca36b363be7978b573aa2e76b523a2"} Feb 26 08:41:35 crc kubenswrapper[4741]: I0226 08:41:35.620781 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9cf8925-eed6-4b8b-981d-07cc64436623","Type":"ContainerDied","Data":"9d4dea437c1b85180e47e2056b6a42189ebf5b0d2224c5fb7af7f15316493193"} Feb 26 08:41:35 crc kubenswrapper[4741]: I0226 08:41:35.666695 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5684558457-bgfq2"] Feb 26 08:41:35 crc kubenswrapper[4741]: I0226 08:41:35.816599 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:41:35 crc kubenswrapper[4741]: E0226 08:41:35.821209 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:41:35 crc kubenswrapper[4741]: I0226 08:41:35.831020 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="2e50a17a-32c0-4f25-8335-ea9f21ee5382" path="/var/lib/kubelet/pods/2e50a17a-32c0-4f25-8335-ea9f21ee5382/volumes" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.090181 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.103479 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8925-eed6-4b8b-981d-07cc64436623-log-httpd\") pod \"d9cf8925-eed6-4b8b-981d-07cc64436623\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.103572 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwqcc\" (UniqueName: \"kubernetes.io/projected/d9cf8925-eed6-4b8b-981d-07cc64436623-kube-api-access-rwqcc\") pod \"d9cf8925-eed6-4b8b-981d-07cc64436623\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.103767 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8925-eed6-4b8b-981d-07cc64436623-run-httpd\") pod \"d9cf8925-eed6-4b8b-981d-07cc64436623\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.103792 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-config-data\") pod \"d9cf8925-eed6-4b8b-981d-07cc64436623\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.103830 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-scripts\") pod \"d9cf8925-eed6-4b8b-981d-07cc64436623\" (UID: 
\"d9cf8925-eed6-4b8b-981d-07cc64436623\") " Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.103852 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-sg-core-conf-yaml\") pod \"d9cf8925-eed6-4b8b-981d-07cc64436623\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.103900 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-combined-ca-bundle\") pod \"d9cf8925-eed6-4b8b-981d-07cc64436623\" (UID: \"d9cf8925-eed6-4b8b-981d-07cc64436623\") " Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.105386 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9cf8925-eed6-4b8b-981d-07cc64436623-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d9cf8925-eed6-4b8b-981d-07cc64436623" (UID: "d9cf8925-eed6-4b8b-981d-07cc64436623"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.105756 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9cf8925-eed6-4b8b-981d-07cc64436623-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d9cf8925-eed6-4b8b-981d-07cc64436623" (UID: "d9cf8925-eed6-4b8b-981d-07cc64436623"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.113947 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9cf8925-eed6-4b8b-981d-07cc64436623-kube-api-access-rwqcc" (OuterVolumeSpecName: "kube-api-access-rwqcc") pod "d9cf8925-eed6-4b8b-981d-07cc64436623" (UID: "d9cf8925-eed6-4b8b-981d-07cc64436623"). 
InnerVolumeSpecName "kube-api-access-rwqcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.124748 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-scripts" (OuterVolumeSpecName: "scripts") pod "d9cf8925-eed6-4b8b-981d-07cc64436623" (UID: "d9cf8925-eed6-4b8b-981d-07cc64436623"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.209911 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.209951 4741 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8925-eed6-4b8b-981d-07cc64436623-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.209961 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwqcc\" (UniqueName: \"kubernetes.io/projected/d9cf8925-eed6-4b8b-981d-07cc64436623-kube-api-access-rwqcc\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.209973 4741 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8925-eed6-4b8b-981d-07cc64436623-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.210327 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d9cf8925-eed6-4b8b-981d-07cc64436623" (UID: "d9cf8925-eed6-4b8b-981d-07cc64436623"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.310283 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9cf8925-eed6-4b8b-981d-07cc64436623" (UID: "d9cf8925-eed6-4b8b-981d-07cc64436623"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.312060 4741 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.312093 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.380361 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-config-data" (OuterVolumeSpecName: "config-data") pod "d9cf8925-eed6-4b8b-981d-07cc64436623" (UID: "d9cf8925-eed6-4b8b-981d-07cc64436623"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.414674 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9cf8925-eed6-4b8b-981d-07cc64436623-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.658424 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9cf8925-eed6-4b8b-981d-07cc64436623","Type":"ContainerDied","Data":"97301faae73e4e48c0a8b1c166f1629fc7ebb9a1655b0f52f0c10cb48fa9b2b1"} Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.658467 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.658517 4741 scope.go:117] "RemoveContainer" containerID="2d07c50936eadaad6b722298b5d5c96043c84b128f815a9105b9b489e9ad1536" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.666738 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5684558457-bgfq2" event={"ID":"36ad4e3b-dd1d-40e9-9051-203753e6be0b","Type":"ContainerStarted","Data":"3d3cf90cf2265a95576bc8160dd319e3f92f82428c79a621a9d238022333f504"} Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.666808 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5684558457-bgfq2" event={"ID":"36ad4e3b-dd1d-40e9-9051-203753e6be0b","Type":"ContainerStarted","Data":"61a44a05b3f8154f5e55acc48b73db967a4f2518bbc9c80ccb32f30e70709f10"} Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.666822 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5684558457-bgfq2" event={"ID":"36ad4e3b-dd1d-40e9-9051-203753e6be0b","Type":"ContainerStarted","Data":"5c4c272b0b345326ee8b7fb3c4729609e32bbd15ca93c634151a98f3dbac0da4"} Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.667065 4741 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.667086 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.713003 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5684558457-bgfq2" podStartSLOduration=2.712978186 podStartE2EDuration="2.712978186s" podCreationTimestamp="2026-02-26 08:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:41:36.699436751 +0000 UTC m=+1731.695374138" watchObservedRunningTime="2026-02-26 08:41:36.712978186 +0000 UTC m=+1731.708915563" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.717142 4741 scope.go:117] "RemoveContainer" containerID="0918d0e1535416598af3f4974fa83a1fcbe6e7153046c310a86f30cddd3e4dc6" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.818014 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.825314 4741 scope.go:117] "RemoveContainer" containerID="fec08c0201e830fb17fe5aeff4824cb6bdca36b363be7978b573aa2e76b523a2" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.835813 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.856318 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:41:36 crc kubenswrapper[4741]: E0226 08:41:36.857071 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="ceilometer-notification-agent" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.857094 4741 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="ceilometer-notification-agent" Feb 26 08:41:36 crc kubenswrapper[4741]: E0226 08:41:36.857125 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="proxy-httpd" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.857132 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="proxy-httpd" Feb 26 08:41:36 crc kubenswrapper[4741]: E0226 08:41:36.857156 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="sg-core" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.857274 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="sg-core" Feb 26 08:41:36 crc kubenswrapper[4741]: E0226 08:41:36.857311 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="ceilometer-central-agent" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.857319 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="ceilometer-central-agent" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.857556 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="ceilometer-notification-agent" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.857575 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="ceilometer-central-agent" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.857597 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="sg-core" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.857615 4741 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" containerName="proxy-httpd" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.860234 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.868955 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.871931 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.872011 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.882433 4741 scope.go:117] "RemoveContainer" containerID="9d4dea437c1b85180e47e2056b6a42189ebf5b0d2224c5fb7af7f15316493193" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.934581 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx7s9\" (UniqueName: \"kubernetes.io/projected/b16de433-98df-4b38-8069-159ccec9435b-kube-api-access-kx7s9\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.934843 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16de433-98df-4b38-8069-159ccec9435b-run-httpd\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.935006 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16de433-98df-4b38-8069-159ccec9435b-log-httpd\") pod \"ceilometer-0\" (UID: 
\"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.935150 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.935232 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-scripts\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.935373 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-config-data\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:36 crc kubenswrapper[4741]: I0226 08:41:36.935532 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.038751 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.039695 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx7s9\" (UniqueName: \"kubernetes.io/projected/b16de433-98df-4b38-8069-159ccec9435b-kube-api-access-kx7s9\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.039931 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16de433-98df-4b38-8069-159ccec9435b-run-httpd\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.040518 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16de433-98df-4b38-8069-159ccec9435b-run-httpd\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.044430 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16de433-98df-4b38-8069-159ccec9435b-log-httpd\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.044641 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.044689 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-scripts\") pod \"ceilometer-0\" (UID: 
\"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.044934 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-config-data\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.048146 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16de433-98df-4b38-8069-159ccec9435b-log-httpd\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.051746 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.053045 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-config-data\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.054626 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.061564 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-scripts\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.079350 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx7s9\" (UniqueName: \"kubernetes.io/projected/b16de433-98df-4b38-8069-159ccec9435b-kube-api-access-kx7s9\") pod \"ceilometer-0\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") " pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.183928 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.335804 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.437633 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-d6b948c9c-pm7qf" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.572992 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-864786857b-lmwmt"] Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.573685 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-864786857b-lmwmt" podUID="86642216-602d-42f0-81f7-4834499a7539" containerName="neutron-api" containerID="cri-o://05a28ce2fb11c6716fdfd3b5ec7204d0d1363f66b1c9de4094beaf75133c02ca" gracePeriod=30 Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.574884 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-864786857b-lmwmt" podUID="86642216-602d-42f0-81f7-4834499a7539" containerName="neutron-httpd" containerID="cri-o://678a131ee86705e16d118c95dc52468378f54397f39100e022a13a61cea8fc22" gracePeriod=30 Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.820550 4741 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9cf8925-eed6-4b8b-981d-07cc64436623" path="/var/lib/kubelet/pods/d9cf8925-eed6-4b8b-981d-07cc64436623/volumes" Feb 26 08:41:37 crc kubenswrapper[4741]: I0226 08:41:37.873316 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:41:38 crc kubenswrapper[4741]: I0226 08:41:38.720935 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16de433-98df-4b38-8069-159ccec9435b","Type":"ContainerStarted","Data":"1462f0ebc9cb2c73dc9fde02f652d0bb34b6b62ce5028768bc1380d0a56ca545"} Feb 26 08:41:38 crc kubenswrapper[4741]: I0226 08:41:38.724689 4741 generic.go:334] "Generic (PLEG): container finished" podID="86642216-602d-42f0-81f7-4834499a7539" containerID="678a131ee86705e16d118c95dc52468378f54397f39100e022a13a61cea8fc22" exitCode=0 Feb 26 08:41:38 crc kubenswrapper[4741]: I0226 08:41:38.724735 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864786857b-lmwmt" event={"ID":"86642216-602d-42f0-81f7-4834499a7539","Type":"ContainerDied","Data":"678a131ee86705e16d118c95dc52468378f54397f39100e022a13a61cea8fc22"} Feb 26 08:41:39 crc kubenswrapper[4741]: I0226 08:41:39.746578 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16de433-98df-4b38-8069-159ccec9435b","Type":"ContainerStarted","Data":"2b42200b74623c8541be3a474ac586277c03eee8a9333693733c5cb5f9f47a7c"} Feb 26 08:41:39 crc kubenswrapper[4741]: I0226 08:41:39.747413 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16de433-98df-4b38-8069-159ccec9435b","Type":"ContainerStarted","Data":"961512aa86f71fcb7f86996329bae4343404592743d22cc136c4d6d99f257a2e"} Feb 26 08:41:40 crc kubenswrapper[4741]: I0226 08:41:40.256920 4741 scope.go:117] "RemoveContainer" containerID="405cf90d586a6a354f0011a8eb0de003afbe8f4e2792a05e389abae5575037aa" Feb 26 08:41:42 crc 
kubenswrapper[4741]: I0226 08:41:42.591415 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:41:43 crc kubenswrapper[4741]: I0226 08:41:43.852413 4741 generic.go:334] "Generic (PLEG): container finished" podID="86642216-602d-42f0-81f7-4834499a7539" containerID="05a28ce2fb11c6716fdfd3b5ec7204d0d1363f66b1c9de4094beaf75133c02ca" exitCode=0 Feb 26 08:41:43 crc kubenswrapper[4741]: I0226 08:41:43.852960 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864786857b-lmwmt" event={"ID":"86642216-602d-42f0-81f7-4834499a7539","Type":"ContainerDied","Data":"05a28ce2fb11c6716fdfd3b5ec7204d0d1363f66b1c9de4094beaf75133c02ca"} Feb 26 08:41:43 crc kubenswrapper[4741]: I0226 08:41:43.922362 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" probeResult="failure" output=< Feb 26 08:41:43 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:41:43 crc kubenswrapper[4741]: > Feb 26 08:41:44 crc kubenswrapper[4741]: I0226 08:41:44.836834 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:44 crc kubenswrapper[4741]: I0226 08:41:44.838029 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5684558457-bgfq2" Feb 26 08:41:45 crc kubenswrapper[4741]: E0226 08:41:45.217051 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00d4f449_ee46_43e5_b427_4913f6e080e2.slice/crio-30f37427f3dc6b3a43798d7945f5941527fc2e7ae3db75a529f93badcb114d15.scope\": RecentStats: unable to find data in memory cache]" Feb 26 08:41:45 crc kubenswrapper[4741]: I0226 08:41:45.892475 4741 generic.go:334] "Generic (PLEG): container 
finished" podID="00d4f449-ee46-43e5-b427-4913f6e080e2" containerID="30f37427f3dc6b3a43798d7945f5941527fc2e7ae3db75a529f93badcb114d15" exitCode=137 Feb 26 08:41:45 crc kubenswrapper[4741]: I0226 08:41:45.892643 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00d4f449-ee46-43e5-b427-4913f6e080e2","Type":"ContainerDied","Data":"30f37427f3dc6b3a43798d7945f5941527fc2e7ae3db75a529f93badcb114d15"} Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.781204 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.784375 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c979493f-75ea-4b53-a806-87c225d5d936" containerName="glance-log" containerID="cri-o://5ff9dd60095d3da8aa776b3fc7a726cf8f951fe745f5f8e9b6c4455cc9c0d517" gracePeriod=30 Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.784650 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c979493f-75ea-4b53-a806-87c225d5d936" containerName="glance-httpd" containerID="cri-o://0be46b640e17fc99fc27314c3e7c4a24df15f4ecd27fb7e60d39cf9922c4f98b" gracePeriod=30 Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.853455 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.910816 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.976438 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00d4f449-ee46-43e5-b427-4913f6e080e2-etc-machine-id\") pod \"00d4f449-ee46-43e5-b427-4913f6e080e2\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.976521 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfw4h\" (UniqueName: \"kubernetes.io/projected/00d4f449-ee46-43e5-b427-4913f6e080e2-kube-api-access-sfw4h\") pod \"00d4f449-ee46-43e5-b427-4913f6e080e2\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.976553 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-scripts\") pod \"00d4f449-ee46-43e5-b427-4913f6e080e2\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.976627 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d4f449-ee46-43e5-b427-4913f6e080e2-logs\") pod \"00d4f449-ee46-43e5-b427-4913f6e080e2\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.976722 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-config-data-custom\") pod \"00d4f449-ee46-43e5-b427-4913f6e080e2\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.976803 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-ovndb-tls-certs\") pod \"86642216-602d-42f0-81f7-4834499a7539\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.976852 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-combined-ca-bundle\") pod \"00d4f449-ee46-43e5-b427-4913f6e080e2\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.976986 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t65zr\" (UniqueName: \"kubernetes.io/projected/86642216-602d-42f0-81f7-4834499a7539-kube-api-access-t65zr\") pod \"86642216-602d-42f0-81f7-4834499a7539\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.977039 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-config\") pod \"86642216-602d-42f0-81f7-4834499a7539\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.977192 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-httpd-config\") pod \"86642216-602d-42f0-81f7-4834499a7539\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.977222 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-combined-ca-bundle\") pod \"86642216-602d-42f0-81f7-4834499a7539\" (UID: \"86642216-602d-42f0-81f7-4834499a7539\") " Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 
08:41:48.977243 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-config-data\") pod \"00d4f449-ee46-43e5-b427-4913f6e080e2\" (UID: \"00d4f449-ee46-43e5-b427-4913f6e080e2\") " Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.980316 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00d4f449-ee46-43e5-b427-4913f6e080e2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "00d4f449-ee46-43e5-b427-4913f6e080e2" (UID: "00d4f449-ee46-43e5-b427-4913f6e080e2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.980822 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d4f449-ee46-43e5-b427-4913f6e080e2-logs" (OuterVolumeSpecName: "logs") pod "00d4f449-ee46-43e5-b427-4913f6e080e2" (UID: "00d4f449-ee46-43e5-b427-4913f6e080e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.992786 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d4f449-ee46-43e5-b427-4913f6e080e2-kube-api-access-sfw4h" (OuterVolumeSpecName: "kube-api-access-sfw4h") pod "00d4f449-ee46-43e5-b427-4913f6e080e2" (UID: "00d4f449-ee46-43e5-b427-4913f6e080e2"). InnerVolumeSpecName "kube-api-access-sfw4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.996332 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "86642216-602d-42f0-81f7-4834499a7539" (UID: "86642216-602d-42f0-81f7-4834499a7539"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.996392 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "00d4f449-ee46-43e5-b427-4913f6e080e2" (UID: "00d4f449-ee46-43e5-b427-4913f6e080e2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:48 crc kubenswrapper[4741]: I0226 08:41:48.998920 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-scripts" (OuterVolumeSpecName: "scripts") pod "00d4f449-ee46-43e5-b427-4913f6e080e2" (UID: "00d4f449-ee46-43e5-b427-4913f6e080e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.001337 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00d4f449-ee46-43e5-b427-4913f6e080e2","Type":"ContainerDied","Data":"a092e8d40078253d26b7fe6b430bc0d7061f0722f9b3d66c2908ea86c98ba0b5"} Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.001402 4741 scope.go:117] "RemoveContainer" containerID="30f37427f3dc6b3a43798d7945f5941527fc2e7ae3db75a529f93badcb114d15" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.001393 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.016869 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86642216-602d-42f0-81f7-4834499a7539-kube-api-access-t65zr" (OuterVolumeSpecName: "kube-api-access-t65zr") pod "86642216-602d-42f0-81f7-4834499a7539" (UID: "86642216-602d-42f0-81f7-4834499a7539"). InnerVolumeSpecName "kube-api-access-t65zr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.022707 4741 generic.go:334] "Generic (PLEG): container finished" podID="c979493f-75ea-4b53-a806-87c225d5d936" containerID="5ff9dd60095d3da8aa776b3fc7a726cf8f951fe745f5f8e9b6c4455cc9c0d517" exitCode=143 Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.022857 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c979493f-75ea-4b53-a806-87c225d5d936","Type":"ContainerDied","Data":"5ff9dd60095d3da8aa776b3fc7a726cf8f951fe745f5f8e9b6c4455cc9c0d517"} Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.035938 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-864786857b-lmwmt" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.035993 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864786857b-lmwmt" event={"ID":"86642216-602d-42f0-81f7-4834499a7539","Type":"ContainerDied","Data":"f5b216f1e3786dc860a223903766f408aea5584a6f909da343bb7d339a6e6dfb"} Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.044171 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00d4f449-ee46-43e5-b427-4913f6e080e2" (UID: "00d4f449-ee46-43e5-b427-4913f6e080e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.049249 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16de433-98df-4b38-8069-159ccec9435b","Type":"ContainerStarted","Data":"b9e2fa9f80a4da777f9e40e1c746995674137584845b0445b9150fbaa83ecc3e"} Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.076423 4741 scope.go:117] "RemoveContainer" containerID="ae7a971b4ef8f0f060397d684b322f96ee1232256011827233a62529971028c3" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.082138 4741 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00d4f449-ee46-43e5-b427-4913f6e080e2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.082171 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfw4h\" (UniqueName: \"kubernetes.io/projected/00d4f449-ee46-43e5-b427-4913f6e080e2-kube-api-access-sfw4h\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.082184 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.082196 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d4f449-ee46-43e5-b427-4913f6e080e2-logs\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.082205 4741 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.082216 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.082226 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t65zr\" (UniqueName: \"kubernetes.io/projected/86642216-602d-42f0-81f7-4834499a7539-kube-api-access-t65zr\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.082236 4741 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.096462 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-config-data" (OuterVolumeSpecName: "config-data") pod "00d4f449-ee46-43e5-b427-4913f6e080e2" (UID: "00d4f449-ee46-43e5-b427-4913f6e080e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.116242 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "86642216-602d-42f0-81f7-4834499a7539" (UID: "86642216-602d-42f0-81f7-4834499a7539"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.122226 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86642216-602d-42f0-81f7-4834499a7539" (UID: "86642216-602d-42f0-81f7-4834499a7539"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.125993 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-config" (OuterVolumeSpecName: "config") pod "86642216-602d-42f0-81f7-4834499a7539" (UID: "86642216-602d-42f0-81f7-4834499a7539"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.151993 4741 scope.go:117] "RemoveContainer" containerID="678a131ee86705e16d118c95dc52468378f54397f39100e022a13a61cea8fc22" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.185161 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.185201 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d4f449-ee46-43e5-b427-4913f6e080e2-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.185214 4741 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.185226 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/86642216-602d-42f0-81f7-4834499a7539-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.254454 4741 scope.go:117] "RemoveContainer" containerID="05a28ce2fb11c6716fdfd3b5ec7204d0d1363f66b1c9de4094beaf75133c02ca" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.390044 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-api-0"] Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.420256 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.469149 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-864786857b-lmwmt"] Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.510394 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-864786857b-lmwmt"] Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.524021 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 26 08:41:49 crc kubenswrapper[4741]: E0226 08:41:49.524681 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86642216-602d-42f0-81f7-4834499a7539" containerName="neutron-api" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.524704 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="86642216-602d-42f0-81f7-4834499a7539" containerName="neutron-api" Feb 26 08:41:49 crc kubenswrapper[4741]: E0226 08:41:49.524719 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d4f449-ee46-43e5-b427-4913f6e080e2" containerName="cinder-api" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.524726 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d4f449-ee46-43e5-b427-4913f6e080e2" containerName="cinder-api" Feb 26 08:41:49 crc kubenswrapper[4741]: E0226 08:41:49.524747 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86642216-602d-42f0-81f7-4834499a7539" containerName="neutron-httpd" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.524753 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="86642216-602d-42f0-81f7-4834499a7539" containerName="neutron-httpd" Feb 26 08:41:49 crc kubenswrapper[4741]: E0226 08:41:49.524766 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d4f449-ee46-43e5-b427-4913f6e080e2" 
containerName="cinder-api-log" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.524772 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d4f449-ee46-43e5-b427-4913f6e080e2" containerName="cinder-api-log" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.525025 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="86642216-602d-42f0-81f7-4834499a7539" containerName="neutron-httpd" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.525063 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d4f449-ee46-43e5-b427-4913f6e080e2" containerName="cinder-api" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.525072 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d4f449-ee46-43e5-b427-4913f6e080e2" containerName="cinder-api-log" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.525084 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="86642216-602d-42f0-81f7-4834499a7539" containerName="neutron-api" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.530795 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.534997 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.535415 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.535742 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.573407 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.610705 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d83c4a22-8843-4882-9c41-0a5c11ba9dff-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.610777 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-config-data-custom\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.610823 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.610882 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.610912 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-scripts\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.610961 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.611001 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83c4a22-8843-4882-9c41-0a5c11ba9dff-logs\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.611080 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-config-data\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.611121 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwxtk\" (UniqueName: \"kubernetes.io/projected/d83c4a22-8843-4882-9c41-0a5c11ba9dff-kube-api-access-wwxtk\") pod \"cinder-api-0\" 
(UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.713968 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83c4a22-8843-4882-9c41-0a5c11ba9dff-logs\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.714275 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-config-data\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.714330 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwxtk\" (UniqueName: \"kubernetes.io/projected/d83c4a22-8843-4882-9c41-0a5c11ba9dff-kube-api-access-wwxtk\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.714392 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d83c4a22-8843-4882-9c41-0a5c11ba9dff-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.714467 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-config-data-custom\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.714535 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.714547 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83c4a22-8843-4882-9c41-0a5c11ba9dff-logs\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.714720 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.714775 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-scripts\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.714917 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.715886 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d83c4a22-8843-4882-9c41-0a5c11ba9dff-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: 
I0226 08:41:49.720710 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-scripts\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.720813 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-config-data-custom\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.721671 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.722641 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-config-data\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.723069 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.731805 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83c4a22-8843-4882-9c41-0a5c11ba9dff-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.737645 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwxtk\" (UniqueName: \"kubernetes.io/projected/d83c4a22-8843-4882-9c41-0a5c11ba9dff-kube-api-access-wwxtk\") pod \"cinder-api-0\" (UID: \"d83c4a22-8843-4882-9c41-0a5c11ba9dff\") " pod="openstack/cinder-api-0" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.787682 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:41:49 crc kubenswrapper[4741]: E0226 08:41:49.788214 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.805774 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d4f449-ee46-43e5-b427-4913f6e080e2" path="/var/lib/kubelet/pods/00d4f449-ee46-43e5-b427-4913f6e080e2/volumes" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.807486 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86642216-602d-42f0-81f7-4834499a7539" path="/var/lib/kubelet/pods/86642216-602d-42f0-81f7-4834499a7539/volumes" Feb 26 08:41:49 crc kubenswrapper[4741]: I0226 08:41:49.863346 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.070848 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9884c9db-d963-4349-8cbb-a4a72a81d8cc","Type":"ContainerStarted","Data":"d155d82005bb0ad30484b0d40267b4e82f208e91d8cbf4f1ecd860261c609914"} Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.115677 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.093870824 podStartE2EDuration="19.115626556s" podCreationTimestamp="2026-02-26 08:41:31 +0000 UTC" firstStartedPulling="2026-02-26 08:41:32.400586898 +0000 UTC m=+1727.396524285" lastFinishedPulling="2026-02-26 08:41:48.42234263 +0000 UTC m=+1743.418280017" observedRunningTime="2026-02-26 08:41:50.091353726 +0000 UTC m=+1745.087291113" watchObservedRunningTime="2026-02-26 08:41:50.115626556 +0000 UTC m=+1745.111563943" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.468718 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-55d8cd5998-s5j8z"] Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.472852 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.480544 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-r89pd" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.480766 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.480877 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.516082 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.532416 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-55d8cd5998-s5j8z"] Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.540279 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-config-data-custom\") pod \"heat-engine-55d8cd5998-s5j8z\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.542508 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2xhr\" (UniqueName: \"kubernetes.io/projected/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-kube-api-access-x2xhr\") pod \"heat-engine-55d8cd5998-s5j8z\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.542573 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-config-data\") pod 
\"heat-engine-55d8cd5998-s5j8z\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.542638 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-combined-ca-bundle\") pod \"heat-engine-55d8cd5998-s5j8z\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.637255 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-878nl"] Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.639585 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.645124 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2xhr\" (UniqueName: \"kubernetes.io/projected/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-kube-api-access-x2xhr\") pod \"heat-engine-55d8cd5998-s5j8z\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.645203 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-config-data\") pod \"heat-engine-55d8cd5998-s5j8z\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.645251 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-combined-ca-bundle\") pod \"heat-engine-55d8cd5998-s5j8z\" (UID: 
\"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.645363 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-config-data-custom\") pod \"heat-engine-55d8cd5998-s5j8z\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.658843 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-config-data\") pod \"heat-engine-55d8cd5998-s5j8z\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.673742 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-config-data-custom\") pod \"heat-engine-55d8cd5998-s5j8z\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.676569 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-combined-ca-bundle\") pod \"heat-engine-55d8cd5998-s5j8z\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.717188 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-878nl"] Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.750268 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.750333 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.750410 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmkm9\" (UniqueName: \"kubernetes.io/projected/3bf01190-91b0-483f-a7fa-a05dff13c5c0-kube-api-access-zmkm9\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.750451 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-config\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.750698 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.750744 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.753952 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6b8994dd55-drsrx"] Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.756002 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.758003 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.781095 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2xhr\" (UniqueName: \"kubernetes.io/projected/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-kube-api-access-x2xhr\") pod \"heat-engine-55d8cd5998-s5j8z\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.837800 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.861238 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.861287 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.861441 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmkm9\" (UniqueName: \"kubernetes.io/projected/3bf01190-91b0-483f-a7fa-a05dff13c5c0-kube-api-access-zmkm9\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.861505 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-config\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.861620 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkr5\" (UniqueName: \"kubernetes.io/projected/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-kube-api-access-dbkr5\") pod \"heat-cfnapi-6b8994dd55-drsrx\" (UID: 
\"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.861863 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-combined-ca-bundle\") pod \"heat-cfnapi-6b8994dd55-drsrx\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.862023 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-config-data\") pod \"heat-cfnapi-6b8994dd55-drsrx\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.862212 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-config-data-custom\") pod \"heat-cfnapi-6b8994dd55-drsrx\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.862253 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.862294 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: 
\"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.865282 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.865885 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.873615 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-config\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.875869 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.882053 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc 
kubenswrapper[4741]: I0226 08:41:50.911813 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b8994dd55-drsrx"] Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.951225 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmkm9\" (UniqueName: \"kubernetes.io/projected/3bf01190-91b0-483f-a7fa-a05dff13c5c0-kube-api-access-zmkm9\") pod \"dnsmasq-dns-f6bc4c6c9-878nl\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.970005 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbkr5\" (UniqueName: \"kubernetes.io/projected/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-kube-api-access-dbkr5\") pod \"heat-cfnapi-6b8994dd55-drsrx\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.971546 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-combined-ca-bundle\") pod \"heat-cfnapi-6b8994dd55-drsrx\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.971717 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-config-data\") pod \"heat-cfnapi-6b8994dd55-drsrx\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.971938 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-config-data-custom\") pod 
\"heat-cfnapi-6b8994dd55-drsrx\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.976385 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-combined-ca-bundle\") pod \"heat-cfnapi-6b8994dd55-drsrx\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.983266 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-config-data\") pod \"heat-cfnapi-6b8994dd55-drsrx\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:50 crc kubenswrapper[4741]: I0226 08:41:50.996881 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-config-data-custom\") pod \"heat-cfnapi-6b8994dd55-drsrx\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.014960 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbkr5\" (UniqueName: \"kubernetes.io/projected/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-kube-api-access-dbkr5\") pod \"heat-cfnapi-6b8994dd55-drsrx\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.025791 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5cb7574d9b-7lwbl"] Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.027679 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.030131 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.065439 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cb7574d9b-7lwbl"] Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.080865 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdqtv\" (UniqueName: \"kubernetes.io/projected/f3252251-2856-49ae-954d-ad40716b99e8-kube-api-access-xdqtv\") pod \"heat-api-5cb7574d9b-7lwbl\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.080984 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-config-data\") pod \"heat-api-5cb7574d9b-7lwbl\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.081195 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-combined-ca-bundle\") pod \"heat-api-5cb7574d9b-7lwbl\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.081437 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-config-data-custom\") pod \"heat-api-5cb7574d9b-7lwbl\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " 
pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.090552 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.141356 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.145832 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d83c4a22-8843-4882-9c41-0a5c11ba9dff","Type":"ContainerStarted","Data":"5b0a4d421901ec91b2347b3f88ee90e7e8086017f81c331dee9a810fb13a4bcc"} Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.185897 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-config-data\") pod \"heat-api-5cb7574d9b-7lwbl\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.186233 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-combined-ca-bundle\") pod \"heat-api-5cb7574d9b-7lwbl\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.186625 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-config-data-custom\") pod \"heat-api-5cb7574d9b-7lwbl\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.189282 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-xdqtv\" (UniqueName: \"kubernetes.io/projected/f3252251-2856-49ae-954d-ad40716b99e8-kube-api-access-xdqtv\") pod \"heat-api-5cb7574d9b-7lwbl\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.212903 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-config-data\") pod \"heat-api-5cb7574d9b-7lwbl\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.219085 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-config-data-custom\") pod \"heat-api-5cb7574d9b-7lwbl\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.219845 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-combined-ca-bundle\") pod \"heat-api-5cb7574d9b-7lwbl\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.221398 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdqtv\" (UniqueName: \"kubernetes.io/projected/f3252251-2856-49ae-954d-ad40716b99e8-kube-api-access-xdqtv\") pod \"heat-api-5cb7574d9b-7lwbl\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.505891 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:51 crc kubenswrapper[4741]: I0226 08:41:51.886711 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-55d8cd5998-s5j8z"] Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.045743 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="c979493f-75ea-4b53-a806-87c225d5d936" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.203:9292/healthcheck\": read tcp 10.217.0.2:48698->10.217.0.203:9292: read: connection reset by peer" Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.045985 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="c979493f-75ea-4b53-a806-87c225d5d936" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.203:9292/healthcheck\": read tcp 10.217.0.2:48694->10.217.0.203:9292: read: connection reset by peer" Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.057866 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-878nl"] Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.190512 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b8994dd55-drsrx"] Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.194763 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" event={"ID":"3bf01190-91b0-483f-a7fa-a05dff13c5c0","Type":"ContainerStarted","Data":"6c9d6eb266d1c3cf2fe24a468878390c1c2405cd20a16ddfa7bc7ce9f0f439c0"} Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.205598 4741 generic.go:334] "Generic (PLEG): container finished" podID="c979493f-75ea-4b53-a806-87c225d5d936" containerID="0be46b640e17fc99fc27314c3e7c4a24df15f4ecd27fb7e60d39cf9922c4f98b" exitCode=0 Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.205704 4741 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c979493f-75ea-4b53-a806-87c225d5d936","Type":"ContainerDied","Data":"0be46b640e17fc99fc27314c3e7c4a24df15f4ecd27fb7e60d39cf9922c4f98b"} Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.231848 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-55d8cd5998-s5j8z" event={"ID":"5a70d076-17f1-4f3e-bb9c-0f5740d59c27","Type":"ContainerStarted","Data":"c8d4f6de392c68f45d0dbb09fb9d601b40d963e03482ddccc436a6b918309be9"} Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.242205 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16de433-98df-4b38-8069-159ccec9435b","Type":"ContainerStarted","Data":"4c3005f273974cd0fdeed189a0c8bf7f50189fe7415c19858716cc343d5225b7"} Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.242443 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="ceilometer-central-agent" containerID="cri-o://961512aa86f71fcb7f86996329bae4343404592743d22cc136c4d6d99f257a2e" gracePeriod=30 Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.242543 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.243089 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="proxy-httpd" containerID="cri-o://4c3005f273974cd0fdeed189a0c8bf7f50189fe7415c19858716cc343d5225b7" gracePeriod=30 Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.243155 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="sg-core" 
containerID="cri-o://b9e2fa9f80a4da777f9e40e1c746995674137584845b0445b9150fbaa83ecc3e" gracePeriod=30 Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.243202 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="ceilometer-notification-agent" containerID="cri-o://2b42200b74623c8541be3a474ac586277c03eee8a9333693733c5cb5f9f47a7c" gracePeriod=30 Feb 26 08:41:52 crc kubenswrapper[4741]: W0226 08:41:52.248648 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8013599d_c0ae_43ba_ae68_bbecc6acfa6b.slice/crio-37f760d96a270ae6ef6babc4b331c4dd539dc54a23d15e8698e015a1b4a74ba8 WatchSource:0}: Error finding container 37f760d96a270ae6ef6babc4b331c4dd539dc54a23d15e8698e015a1b4a74ba8: Status 404 returned error can't find the container with id 37f760d96a270ae6ef6babc4b331c4dd539dc54a23d15e8698e015a1b4a74ba8 Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.301744 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.434185952 podStartE2EDuration="16.301707296s" podCreationTimestamp="2026-02-26 08:41:36 +0000 UTC" firstStartedPulling="2026-02-26 08:41:37.879772613 +0000 UTC m=+1732.875710000" lastFinishedPulling="2026-02-26 08:41:50.747293957 +0000 UTC m=+1745.743231344" observedRunningTime="2026-02-26 08:41:52.289217411 +0000 UTC m=+1747.285154808" watchObservedRunningTime="2026-02-26 08:41:52.301707296 +0000 UTC m=+1747.297644683" Feb 26 08:41:52 crc kubenswrapper[4741]: I0226 08:41:52.440805 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cb7574d9b-7lwbl"] Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.055451 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.131633 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.160191 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-config-data\") pod \"c979493f-75ea-4b53-a806-87c225d5d936\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.160338 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-combined-ca-bundle\") pod \"c979493f-75ea-4b53-a806-87c225d5d936\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.160403 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c979493f-75ea-4b53-a806-87c225d5d936-httpd-run\") pod \"c979493f-75ea-4b53-a806-87c225d5d936\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.161139 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"c979493f-75ea-4b53-a806-87c225d5d936\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.161192 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-scripts\") pod \"c979493f-75ea-4b53-a806-87c225d5d936\" (UID: 
\"c979493f-75ea-4b53-a806-87c225d5d936\") " Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.161221 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c979493f-75ea-4b53-a806-87c225d5d936-logs\") pod \"c979493f-75ea-4b53-a806-87c225d5d936\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.161422 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-internal-tls-certs\") pod \"c979493f-75ea-4b53-a806-87c225d5d936\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.161501 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pffrx\" (UniqueName: \"kubernetes.io/projected/c979493f-75ea-4b53-a806-87c225d5d936-kube-api-access-pffrx\") pod \"c979493f-75ea-4b53-a806-87c225d5d936\" (UID: \"c979493f-75ea-4b53-a806-87c225d5d936\") " Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.167184 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c979493f-75ea-4b53-a806-87c225d5d936-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c979493f-75ea-4b53-a806-87c225d5d936" (UID: "c979493f-75ea-4b53-a806-87c225d5d936"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.169208 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c979493f-75ea-4b53-a806-87c225d5d936-logs" (OuterVolumeSpecName: "logs") pod "c979493f-75ea-4b53-a806-87c225d5d936" (UID: "c979493f-75ea-4b53-a806-87c225d5d936"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.172608 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c979493f-75ea-4b53-a806-87c225d5d936-kube-api-access-pffrx" (OuterVolumeSpecName: "kube-api-access-pffrx") pod "c979493f-75ea-4b53-a806-87c225d5d936" (UID: "c979493f-75ea-4b53-a806-87c225d5d936"). InnerVolumeSpecName "kube-api-access-pffrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.179331 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-scripts" (OuterVolumeSpecName: "scripts") pod "c979493f-75ea-4b53-a806-87c225d5d936" (UID: "c979493f-75ea-4b53-a806-87c225d5d936"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.252239 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab" (OuterVolumeSpecName: "glance") pod "c979493f-75ea-4b53-a806-87c225d5d936" (UID: "c979493f-75ea-4b53-a806-87c225d5d936"). InnerVolumeSpecName "pvc-66d3046e-b0f1-49dc-a936-827184187eab". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.256361 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c979493f-75ea-4b53-a806-87c225d5d936" (UID: "c979493f-75ea-4b53-a806-87c225d5d936"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.276577 4741 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c979493f-75ea-4b53-a806-87c225d5d936-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.276629 4741 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") on node \"crc\" " Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.276641 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.276651 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c979493f-75ea-4b53-a806-87c225d5d936-logs\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.276660 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pffrx\" (UniqueName: \"kubernetes.io/projected/c979493f-75ea-4b53-a806-87c225d5d936-kube-api-access-pffrx\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.276670 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.316233 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-config-data" (OuterVolumeSpecName: "config-data") pod "c979493f-75ea-4b53-a806-87c225d5d936" (UID: 
"c979493f-75ea-4b53-a806-87c225d5d936"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.322513 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c979493f-75ea-4b53-a806-87c225d5d936" (UID: "c979493f-75ea-4b53-a806-87c225d5d936"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.325990 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d83c4a22-8843-4882-9c41-0a5c11ba9dff","Type":"ContainerStarted","Data":"e4c83358210c2d0f3e02e0600a8edfd7b1a91e4ca202e8f47bd3d4c0c2099c7e"} Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.336361 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7mjbs" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.342356 4741 generic.go:334] "Generic (PLEG): container finished" podID="b16de433-98df-4b38-8069-159ccec9435b" containerID="4c3005f273974cd0fdeed189a0c8bf7f50189fe7415c19858716cc343d5225b7" exitCode=0 Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.342385 4741 generic.go:334] "Generic (PLEG): container finished" podID="b16de433-98df-4b38-8069-159ccec9435b" containerID="b9e2fa9f80a4da777f9e40e1c746995674137584845b0445b9150fbaa83ecc3e" exitCode=2 Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.342433 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16de433-98df-4b38-8069-159ccec9435b","Type":"ContainerDied","Data":"4c3005f273974cd0fdeed189a0c8bf7f50189fe7415c19858716cc343d5225b7"} Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.342461 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"b16de433-98df-4b38-8069-159ccec9435b","Type":"ContainerDied","Data":"b9e2fa9f80a4da777f9e40e1c746995674137584845b0445b9150fbaa83ecc3e"} Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.343192 4741 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.343344 4741 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-66d3046e-b0f1-49dc-a936-827184187eab" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab") on node "crc" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.349390 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c979493f-75ea-4b53-a806-87c225d5d936","Type":"ContainerDied","Data":"9faf2ab54474609813abf9bdd4735d6869f6480f10cf0ff4197be11030a661a3"} Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.349479 4741 scope.go:117] "RemoveContainer" containerID="0be46b640e17fc99fc27314c3e7c4a24df15f4ecd27fb7e60d39cf9922c4f98b" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.349672 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.372174 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8994dd55-drsrx" event={"ID":"8013599d-c0ae-43ba-ae68-bbecc6acfa6b","Type":"ContainerStarted","Data":"37f760d96a270ae6ef6babc4b331c4dd539dc54a23d15e8698e015a1b4a74ba8"} Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.391728 4741 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.391763 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c979493f-75ea-4b53-a806-87c225d5d936-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.391775 4741 reconciler_common.go:293] "Volume detached for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.422661 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-55d8cd5998-s5j8z" event={"ID":"5a70d076-17f1-4f3e-bb9c-0f5740d59c27","Type":"ContainerStarted","Data":"505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3"} Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.423595 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.443043 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7mjbs"] Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.443187 4741 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/heat-api-5cb7574d9b-7lwbl" event={"ID":"f3252251-2856-49ae-954d-ad40716b99e8","Type":"ContainerStarted","Data":"fd18f130fa54871c909ca28c93cbcaea60c94a1e22e684131e9a1ea77a4850a1"} Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.467571 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.488178 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.522997 4741 scope.go:117] "RemoveContainer" containerID="5ff9dd60095d3da8aa776b3fc7a726cf8f951fe745f5f8e9b6c4455cc9c0d517" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.535427 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 08:41:53 crc kubenswrapper[4741]: E0226 08:41:53.536097 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c979493f-75ea-4b53-a806-87c225d5d936" containerName="glance-log" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.571167 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="c979493f-75ea-4b53-a806-87c225d5d936" containerName="glance-log" Feb 26 08:41:53 crc kubenswrapper[4741]: E0226 08:41:53.571232 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c979493f-75ea-4b53-a806-87c225d5d936" containerName="glance-httpd" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.571246 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="c979493f-75ea-4b53-a806-87c225d5d936" containerName="glance-httpd" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.571832 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="c979493f-75ea-4b53-a806-87c225d5d936" containerName="glance-httpd" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.571854 4741 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c979493f-75ea-4b53-a806-87c225d5d936" containerName="glance-log" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.569681 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-55d8cd5998-s5j8z" podStartSLOduration=3.569651238 podStartE2EDuration="3.569651238s" podCreationTimestamp="2026-02-26 08:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:41:53.4554293 +0000 UTC m=+1748.451366687" watchObservedRunningTime="2026-02-26 08:41:53.569651238 +0000 UTC m=+1748.565588625" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.587621 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.590028 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.592303 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.659101 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.714477 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.714566 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.714611 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.714644 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.714664 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-logs\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.714705 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.714803 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dl6q\" (UniqueName: \"kubernetes.io/projected/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-kube-api-access-4dl6q\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.714872 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.821932 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.822403 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dl6q\" (UniqueName: \"kubernetes.io/projected/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-kube-api-access-4dl6q\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.822508 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc 
kubenswrapper[4741]: I0226 08:41:53.822579 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.822641 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.822694 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.822738 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.822765 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-logs\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.823270 4741 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-logs\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.824224 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.839508 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.840577 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.840627 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/417e49831a1cdf1c972733aab859fe5a1181b30877fdb96884b853b109c5ec95/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.847715 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.870173 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.876576 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.876956 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dl6q\" (UniqueName: 
\"kubernetes.io/projected/f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e-kube-api-access-4dl6q\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.915389 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c979493f-75ea-4b53-a806-87c225d5d936" path="/var/lib/kubelet/pods/c979493f-75ea-4b53-a806-87c225d5d936/volumes" Feb 26 08:41:53 crc kubenswrapper[4741]: I0226 08:41:53.990490 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-66d3046e-b0f1-49dc-a936-827184187eab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d3046e-b0f1-49dc-a936-827184187eab\") pod \"glance-default-internal-api-0\" (UID: \"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e\") " pod="openstack/glance-default-internal-api-0" Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.273651 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.467292 4741 generic.go:334] "Generic (PLEG): container finished" podID="b16de433-98df-4b38-8069-159ccec9435b" containerID="2b42200b74623c8541be3a474ac586277c03eee8a9333693733c5cb5f9f47a7c" exitCode=0 Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.467326 4741 generic.go:334] "Generic (PLEG): container finished" podID="b16de433-98df-4b38-8069-159ccec9435b" containerID="961512aa86f71fcb7f86996329bae4343404592743d22cc136c4d6d99f257a2e" exitCode=0 Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.467382 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16de433-98df-4b38-8069-159ccec9435b","Type":"ContainerDied","Data":"2b42200b74623c8541be3a474ac586277c03eee8a9333693733c5cb5f9f47a7c"} Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.467423 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16de433-98df-4b38-8069-159ccec9435b","Type":"ContainerDied","Data":"961512aa86f71fcb7f86996329bae4343404592743d22cc136c4d6d99f257a2e"} Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.467440 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b16de433-98df-4b38-8069-159ccec9435b","Type":"ContainerDied","Data":"1462f0ebc9cb2c73dc9fde02f652d0bb34b6b62ce5028768bc1380d0a56ca545"} Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.467451 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1462f0ebc9cb2c73dc9fde02f652d0bb34b6b62ce5028768bc1380d0a56ca545" Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.476009 4741 generic.go:334] "Generic (PLEG): container finished" podID="3bf01190-91b0-483f-a7fa-a05dff13c5c0" containerID="79568d53e01bcf164fb686f3623bc2f2942a086e6c311dfdea0c46c457ed4648" exitCode=0 Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 
08:41:54.477759 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" event={"ID":"3bf01190-91b0-483f-a7fa-a05dff13c5c0","Type":"ContainerDied","Data":"79568d53e01bcf164fb686f3623bc2f2942a086e6c311dfdea0c46c457ed4648"}
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.532775 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7mjbs" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server" containerID="cri-o://556478990e2477a025138b7fdd8fd30b577013a5cfa487a8f52da05891c49a52" gracePeriod=2
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.533602 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.548654 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-combined-ca-bundle\") pod \"b16de433-98df-4b38-8069-159ccec9435b\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") "
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.548751 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-config-data\") pod \"b16de433-98df-4b38-8069-159ccec9435b\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") "
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.548804 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16de433-98df-4b38-8069-159ccec9435b-log-httpd\") pod \"b16de433-98df-4b38-8069-159ccec9435b\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") "
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.548911 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx7s9\" (UniqueName: \"kubernetes.io/projected/b16de433-98df-4b38-8069-159ccec9435b-kube-api-access-kx7s9\") pod \"b16de433-98df-4b38-8069-159ccec9435b\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") "
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.548933 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-scripts\") pod \"b16de433-98df-4b38-8069-159ccec9435b\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") "
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.549097 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-sg-core-conf-yaml\") pod \"b16de433-98df-4b38-8069-159ccec9435b\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") "
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.549157 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16de433-98df-4b38-8069-159ccec9435b-run-httpd\") pod \"b16de433-98df-4b38-8069-159ccec9435b\" (UID: \"b16de433-98df-4b38-8069-159ccec9435b\") "
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.549238 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d83c4a22-8843-4882-9c41-0a5c11ba9dff","Type":"ContainerStarted","Data":"ba1c40768fa083d11912aaf320d2d6aaae047b41643623c5d209105732aa4b44"}
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.549358 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.549907 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b16de433-98df-4b38-8069-159ccec9435b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b16de433-98df-4b38-8069-159ccec9435b" (UID: "b16de433-98df-4b38-8069-159ccec9435b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.555574 4741 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16de433-98df-4b38-8069-159ccec9435b-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.558831 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b16de433-98df-4b38-8069-159ccec9435b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b16de433-98df-4b38-8069-159ccec9435b" (UID: "b16de433-98df-4b38-8069-159ccec9435b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.575262 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-scripts" (OuterVolumeSpecName: "scripts") pod "b16de433-98df-4b38-8069-159ccec9435b" (UID: "b16de433-98df-4b38-8069-159ccec9435b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.576868 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16de433-98df-4b38-8069-159ccec9435b-kube-api-access-kx7s9" (OuterVolumeSpecName: "kube-api-access-kx7s9") pod "b16de433-98df-4b38-8069-159ccec9435b" (UID: "b16de433-98df-4b38-8069-159ccec9435b"). InnerVolumeSpecName "kube-api-access-kx7s9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.584909 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.584883535 podStartE2EDuration="5.584883535s" podCreationTimestamp="2026-02-26 08:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:41:54.575758925 +0000 UTC m=+1749.571696322" watchObservedRunningTime="2026-02-26 08:41:54.584883535 +0000 UTC m=+1749.580820922"
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.659477 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx7s9\" (UniqueName: \"kubernetes.io/projected/b16de433-98df-4b38-8069-159ccec9435b-kube-api-access-kx7s9\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.659512 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.659523 4741 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b16de433-98df-4b38-8069-159ccec9435b-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.718275 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b16de433-98df-4b38-8069-159ccec9435b" (UID: "b16de433-98df-4b38-8069-159ccec9435b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.767437 4741 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.805422 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b16de433-98df-4b38-8069-159ccec9435b" (UID: "b16de433-98df-4b38-8069-159ccec9435b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.869786 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-config-data" (OuterVolumeSpecName: "config-data") pod "b16de433-98df-4b38-8069-159ccec9435b" (UID: "b16de433-98df-4b38-8069-159ccec9435b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.871905 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:54 crc kubenswrapper[4741]: I0226 08:41:54.871951 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16de433-98df-4b38-8069-159ccec9435b-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.127084 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.209273 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7566dbcd8b-6qsnk"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.255209 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mjbs"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.358910 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7566dbcd8b-6qsnk"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.420016 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kz4j\" (UniqueName: \"kubernetes.io/projected/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-kube-api-access-2kz4j\") pod \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\" (UID: \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\") "
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.422568 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-utilities" (OuterVolumeSpecName: "utilities") pod "9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" (UID: "9f09f7b6-8e20-42c2-b693-57b0ad33e0ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.447682 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-kube-api-access-2kz4j" (OuterVolumeSpecName: "kube-api-access-2kz4j") pod "9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" (UID: "9f09f7b6-8e20-42c2-b693-57b0ad33e0ba"). InnerVolumeSpecName "kube-api-access-2kz4j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.420211 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-utilities\") pod \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\" (UID: \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\") "
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.457088 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-catalog-content\") pod \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\" (UID: \"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba\") "
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.460920 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d788959bb-k7x27"]
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.461240 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-d788959bb-k7x27" podUID="bf6f5c31-f24d-4815-a2ed-0452b4a255b5" containerName="placement-log" containerID="cri-o://361299b8f26873b7e17b5f0dbc5e4e44625d859b02e1a5c0f0835a98a84e4c4f" gracePeriod=30
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.461409 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-d788959bb-k7x27" podUID="bf6f5c31-f24d-4815-a2ed-0452b4a255b5" containerName="placement-api" containerID="cri-o://ff35a78726fff2360bc9fbdccade7edc73ebd043bc0c75b111328b4095a4f92c" gracePeriod=30
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.465713 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kz4j\" (UniqueName: \"kubernetes.io/projected/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-kube-api-access-2kz4j\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.465743 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.562612 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e","Type":"ContainerStarted","Data":"f704a34ebb45b803877e1766f40ca0d0e17ff75aeca4cc357863318b29b0cb13"}
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.569230 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" event={"ID":"3bf01190-91b0-483f-a7fa-a05dff13c5c0","Type":"ContainerStarted","Data":"9b70af1d892965234405f30c436883b4682fbf49e94c4d6c5ce8e4da2cb456d7"}
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.570455 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.595769 4741 generic.go:334] "Generic (PLEG): container finished" podID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerID="556478990e2477a025138b7fdd8fd30b577013a5cfa487a8f52da05891c49a52" exitCode=0
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.595943 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.596790 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mjbs" event={"ID":"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba","Type":"ContainerDied","Data":"556478990e2477a025138b7fdd8fd30b577013a5cfa487a8f52da05891c49a52"}
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.596862 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mjbs" event={"ID":"9f09f7b6-8e20-42c2-b693-57b0ad33e0ba","Type":"ContainerDied","Data":"bfb7dadaad0f856aca42ce3e7764c790d023815ab1602bc52c98c313076a43e1"}
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.596886 4741 scope.go:117] "RemoveContainer" containerID="556478990e2477a025138b7fdd8fd30b577013a5cfa487a8f52da05891c49a52"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.597154 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mjbs"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.609097 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" podStartSLOduration=5.609067577 podStartE2EDuration="5.609067577s" podCreationTimestamp="2026-02-26 08:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:41:55.596142049 +0000 UTC m=+1750.592079436" watchObservedRunningTime="2026-02-26 08:41:55.609067577 +0000 UTC m=+1750.605004964"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.686635 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.709672 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.732937 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 26 08:41:55 crc kubenswrapper[4741]: E0226 08:41:55.734897 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="extract-content"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.735102 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="extract-content"
Feb 26 08:41:55 crc kubenswrapper[4741]: E0226 08:41:55.736133 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.736224 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server"
Feb 26 08:41:55 crc kubenswrapper[4741]: E0226 08:41:55.736300 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="proxy-httpd"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.736364 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="proxy-httpd"
Feb 26 08:41:55 crc kubenswrapper[4741]: E0226 08:41:55.736457 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="ceilometer-notification-agent"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.736538 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="ceilometer-notification-agent"
Feb 26 08:41:55 crc kubenswrapper[4741]: E0226 08:41:55.736629 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="sg-core"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.736734 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="sg-core"
Feb 26 08:41:55 crc kubenswrapper[4741]: E0226 08:41:55.736825 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="ceilometer-central-agent"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.736891 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="ceilometer-central-agent"
Feb 26 08:41:55 crc kubenswrapper[4741]: E0226 08:41:55.736974 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="extract-utilities"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.737040 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="extract-utilities"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.737596 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="ceilometer-notification-agent"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.739662 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="sg-core"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.739741 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.739763 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="ceilometer-central-agent"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.739788 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16de433-98df-4b38-8069-159ccec9435b" containerName="proxy-httpd"
Feb 26 08:41:55 crc kubenswrapper[4741]: E0226 08:41:55.741020 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.741041 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server"
Feb 26 08:41:55 crc kubenswrapper[4741]: E0226 08:41:55.741325 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf6f5c31_f24d_4815_a2ed_0452b4a255b5.slice/crio-conmon-361299b8f26873b7e17b5f0dbc5e4e44625d859b02e1a5c0f0835a98a84e4c4f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf6f5c31_f24d_4815_a2ed_0452b4a255b5.slice/crio-361299b8f26873b7e17b5f0dbc5e4e44625d859b02e1a5c0f0835a98a84e4c4f.scope\": RecentStats: unable to find data in memory cache]"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.741407 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" containerName="registry-server"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.746522 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.764233 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.775766 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.780048 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.810139 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" (UID: "9f09f7b6-8e20-42c2-b693-57b0ad33e0ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.824795 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.918507 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16de433-98df-4b38-8069-159ccec9435b" path="/var/lib/kubelet/pods/b16de433-98df-4b38-8069-159ccec9435b/volumes"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.934103 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.934198 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.934228 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-scripts\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.934617 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-config-data\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.934813 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87f31dc5-9567-43f5-a2a8-34115d352a77-run-httpd\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.935793 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87f31dc5-9567-43f5-a2a8-34115d352a77-log-httpd\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:55 crc kubenswrapper[4741]: I0226 08:41:55.936264 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmjzn\" (UniqueName: \"kubernetes.io/projected/87f31dc5-9567-43f5-a2a8-34115d352a77-kube-api-access-mmjzn\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.033128 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7mjbs"]
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.039005 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmjzn\" (UniqueName: \"kubernetes.io/projected/87f31dc5-9567-43f5-a2a8-34115d352a77-kube-api-access-mmjzn\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.039204 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.039243 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.039264 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-scripts\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.039286 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-config-data\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.039327 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87f31dc5-9567-43f5-a2a8-34115d352a77-run-httpd\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.039411 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87f31dc5-9567-43f5-a2a8-34115d352a77-log-httpd\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.039959 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87f31dc5-9567-43f5-a2a8-34115d352a77-log-httpd\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.040979 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87f31dc5-9567-43f5-a2a8-34115d352a77-run-httpd\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.048180 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-scripts\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.064296 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-config-data\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.066225 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmjzn\" (UniqueName: \"kubernetes.io/projected/87f31dc5-9567-43f5-a2a8-34115d352a77-kube-api-access-mmjzn\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.066387 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.068859 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7mjbs"]
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.073208 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.085070 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.085412 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="09de7a93-fe79-457d-8cdf-e710ca54e91a" containerName="glance-log" containerID="cri-o://a39cd06c7c4d4de510a2ea0012e028f74d282f530f9c1405f4ac3c898cd66b25" gracePeriod=30
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.085601 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="09de7a93-fe79-457d-8cdf-e710ca54e91a" containerName="glance-httpd" containerID="cri-o://655ea2dd07b9e6afe9f707b9e0e738e0786edbe7b6a3921a1ad80dcc06a4f853" gracePeriod=30
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.116929 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.415723 4741 scope.go:117] "RemoveContainer" containerID="c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8"
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.615482 4741 generic.go:334] "Generic (PLEG): container finished" podID="bf6f5c31-f24d-4815-a2ed-0452b4a255b5" containerID="361299b8f26873b7e17b5f0dbc5e4e44625d859b02e1a5c0f0835a98a84e4c4f" exitCode=143
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.615576 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d788959bb-k7x27" event={"ID":"bf6f5c31-f24d-4815-a2ed-0452b4a255b5","Type":"ContainerDied","Data":"361299b8f26873b7e17b5f0dbc5e4e44625d859b02e1a5c0f0835a98a84e4c4f"}
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.619458 4741 generic.go:334] "Generic (PLEG): container finished" podID="09de7a93-fe79-457d-8cdf-e710ca54e91a" containerID="a39cd06c7c4d4de510a2ea0012e028f74d282f530f9c1405f4ac3c898cd66b25" exitCode=143
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.619547 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09de7a93-fe79-457d-8cdf-e710ca54e91a","Type":"ContainerDied","Data":"a39cd06c7c4d4de510a2ea0012e028f74d282f530f9c1405f4ac3c898cd66b25"}
Feb 26 08:41:56 crc kubenswrapper[4741]: I0226 08:41:56.628671 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e","Type":"ContainerStarted","Data":"54c57e03850d9fe54d28cfb78d8a3581ea61e16815a52977cdce0546ce1ba698"}
Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.150161 4741 scope.go:117] "RemoveContainer" containerID="488c2db191d6535ba4596845eb0a78f1f45de668a17c5c1e09ca6eadfe289f4a"
Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.448859 4741 scope.go:117] "RemoveContainer" containerID="3306fde0345fef6721dbae9524663ccabadc21490ae3881e1b028963169b4cfb"
Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.536685 4741 scope.go:117] "RemoveContainer" containerID="556478990e2477a025138b7fdd8fd30b577013a5cfa487a8f52da05891c49a52"
Feb 26 08:41:57 crc kubenswrapper[4741]: E0226 08:41:57.540735 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"556478990e2477a025138b7fdd8fd30b577013a5cfa487a8f52da05891c49a52\": container with ID starting with 556478990e2477a025138b7fdd8fd30b577013a5cfa487a8f52da05891c49a52 not found: ID does not exist" containerID="556478990e2477a025138b7fdd8fd30b577013a5cfa487a8f52da05891c49a52"
Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.540793 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"556478990e2477a025138b7fdd8fd30b577013a5cfa487a8f52da05891c49a52"} err="failed to get container status \"556478990e2477a025138b7fdd8fd30b577013a5cfa487a8f52da05891c49a52\": rpc error: code = NotFound desc = could not find container \"556478990e2477a025138b7fdd8fd30b577013a5cfa487a8f52da05891c49a52\": container with ID starting with 556478990e2477a025138b7fdd8fd30b577013a5cfa487a8f52da05891c49a52 not found: ID does not exist"
Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.540826 4741 scope.go:117] "RemoveContainer" containerID="c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8"
Feb 26 08:41:57 crc kubenswrapper[4741]: E0226 08:41:57.547334 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8\": container with ID starting with c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8 not found: ID does not exist" containerID="c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8"
Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.547394 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8"} err="failed to get container status \"c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8\": rpc error: code = NotFound desc = could not find container \"c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8\": container with ID starting with c51608d0af0aec438b79c05272fed98a8b8fd9cf4123c86508db1740c276b0c8 not found: ID does not exist"
Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.547429 4741 scope.go:117] "RemoveContainer" containerID="488c2db191d6535ba4596845eb0a78f1f45de668a17c5c1e09ca6eadfe289f4a"
Feb 26 08:41:57 crc kubenswrapper[4741]: E0226 08:41:57.555221 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"488c2db191d6535ba4596845eb0a78f1f45de668a17c5c1e09ca6eadfe289f4a\": container with ID starting with 488c2db191d6535ba4596845eb0a78f1f45de668a17c5c1e09ca6eadfe289f4a not found: ID does not exist" containerID="488c2db191d6535ba4596845eb0a78f1f45de668a17c5c1e09ca6eadfe289f4a"
Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.555304 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"488c2db191d6535ba4596845eb0a78f1f45de668a17c5c1e09ca6eadfe289f4a"} err="failed to get container status \"488c2db191d6535ba4596845eb0a78f1f45de668a17c5c1e09ca6eadfe289f4a\": rpc error: code = NotFound desc = could not find container \"488c2db191d6535ba4596845eb0a78f1f45de668a17c5c1e09ca6eadfe289f4a\": container with ID starting with 488c2db191d6535ba4596845eb0a78f1f45de668a17c5c1e09ca6eadfe289f4a not found: ID does not exist"
Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.555348 4741 scope.go:117] "RemoveContainer" containerID="3306fde0345fef6721dbae9524663ccabadc21490ae3881e1b028963169b4cfb"
Feb 26 08:41:57 crc kubenswrapper[4741]: E0226 08:41:57.555860 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3306fde0345fef6721dbae9524663ccabadc21490ae3881e1b028963169b4cfb\": container with ID starting with 3306fde0345fef6721dbae9524663ccabadc21490ae3881e1b028963169b4cfb not found: ID does not exist" containerID="3306fde0345fef6721dbae9524663ccabadc21490ae3881e1b028963169b4cfb"
Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.555901 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3306fde0345fef6721dbae9524663ccabadc21490ae3881e1b028963169b4cfb"} err="failed to get container status \"3306fde0345fef6721dbae9524663ccabadc21490ae3881e1b028963169b4cfb\": rpc error: code = NotFound desc = could not find container \"3306fde0345fef6721dbae9524663ccabadc21490ae3881e1b028963169b4cfb\": container with ID starting with 3306fde0345fef6721dbae9524663ccabadc21490ae3881e1b028963169b4cfb not found: ID does not exist"
Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.701715 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8994dd55-drsrx" event={"ID":"8013599d-c0ae-43ba-ae68-bbecc6acfa6b","Type":"ContainerStarted","Data":"4ba1cecc1f47aa6cb49aa2e08ef9178d0be835fc899c87cc09188bcea039bdad"}
Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.702217 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness"
status="" pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.738737 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6b8994dd55-drsrx" podStartSLOduration=2.863189231 podStartE2EDuration="7.738684459s" podCreationTimestamp="2026-02-26 08:41:50 +0000 UTC" firstStartedPulling="2026-02-26 08:41:52.274888083 +0000 UTC m=+1747.270825470" lastFinishedPulling="2026-02-26 08:41:57.150383311 +0000 UTC m=+1752.146320698" observedRunningTime="2026-02-26 08:41:57.723554839 +0000 UTC m=+1752.719492246" watchObservedRunningTime="2026-02-26 08:41:57.738684459 +0000 UTC m=+1752.734621846" Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.809681 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f09f7b6-8e20-42c2-b693-57b0ad33e0ba" path="/var/lib/kubelet/pods/9f09f7b6-8e20-42c2-b693-57b0ad33e0ba/volumes" Feb 26 08:41:57 crc kubenswrapper[4741]: I0226 08:41:57.836156 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:41:58 crc kubenswrapper[4741]: I0226 08:41:58.716845 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cb7574d9b-7lwbl" event={"ID":"f3252251-2856-49ae-954d-ad40716b99e8","Type":"ContainerStarted","Data":"78181bc545e58de3c5e121c4b65279bf6cafd4e8b27f0c1a6c91d63b3a7c161c"} Feb 26 08:41:58 crc kubenswrapper[4741]: I0226 08:41:58.717737 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:41:58 crc kubenswrapper[4741]: I0226 08:41:58.723174 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e","Type":"ContainerStarted","Data":"9e0652b1fbbb59ef942f84e6b833523f9ff7591dad7954d6404907d5e5c43c59"} Feb 26 08:41:58 crc kubenswrapper[4741]: I0226 08:41:58.728121 4741 generic.go:334] "Generic (PLEG): container 
finished" podID="bf6f5c31-f24d-4815-a2ed-0452b4a255b5" containerID="ff35a78726fff2360bc9fbdccade7edc73ebd043bc0c75b111328b4095a4f92c" exitCode=0 Feb 26 08:41:58 crc kubenswrapper[4741]: I0226 08:41:58.728220 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d788959bb-k7x27" event={"ID":"bf6f5c31-f24d-4815-a2ed-0452b4a255b5","Type":"ContainerDied","Data":"ff35a78726fff2360bc9fbdccade7edc73ebd043bc0c75b111328b4095a4f92c"} Feb 26 08:41:58 crc kubenswrapper[4741]: I0226 08:41:58.731943 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87f31dc5-9567-43f5-a2a8-34115d352a77","Type":"ContainerStarted","Data":"db3913af3c08bbde3a6433140796deacda01124a7a8dd2b2398c203b1b4634fd"} Feb 26 08:41:58 crc kubenswrapper[4741]: I0226 08:41:58.751735 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5cb7574d9b-7lwbl" podStartSLOduration=3.989463474 podStartE2EDuration="8.750103908s" podCreationTimestamp="2026-02-26 08:41:50 +0000 UTC" firstStartedPulling="2026-02-26 08:41:52.478490321 +0000 UTC m=+1747.474427708" lastFinishedPulling="2026-02-26 08:41:57.239130765 +0000 UTC m=+1752.235068142" observedRunningTime="2026-02-26 08:41:58.735354529 +0000 UTC m=+1753.731291916" watchObservedRunningTime="2026-02-26 08:41:58.750103908 +0000 UTC m=+1753.746041295" Feb 26 08:41:58 crc kubenswrapper[4741]: I0226 08:41:58.797632 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.797600409 podStartE2EDuration="5.797600409s" podCreationTimestamp="2026-02-26 08:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:41:58.757680994 +0000 UTC m=+1753.753618381" watchObservedRunningTime="2026-02-26 08:41:58.797600409 +0000 UTC m=+1753.793537796" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 
08:41:59.261332 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d788959bb-k7x27" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.365348 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-combined-ca-bundle\") pod \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.365458 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-internal-tls-certs\") pod \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.365568 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-logs\") pod \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.365598 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-public-tls-certs\") pod \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.365643 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtrmj\" (UniqueName: \"kubernetes.io/projected/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-kube-api-access-rtrmj\") pod \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.365722 4741 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-scripts\") pod \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.365774 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-config-data\") pod \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\" (UID: \"bf6f5c31-f24d-4815-a2ed-0452b4a255b5\") " Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.366612 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-logs" (OuterVolumeSpecName: "logs") pod "bf6f5c31-f24d-4815-a2ed-0452b4a255b5" (UID: "bf6f5c31-f24d-4815-a2ed-0452b4a255b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.389206 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-scripts" (OuterVolumeSpecName: "scripts") pod "bf6f5c31-f24d-4815-a2ed-0452b4a255b5" (UID: "bf6f5c31-f24d-4815-a2ed-0452b4a255b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.390374 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-kube-api-access-rtrmj" (OuterVolumeSpecName: "kube-api-access-rtrmj") pod "bf6f5c31-f24d-4815-a2ed-0452b4a255b5" (UID: "bf6f5c31-f24d-4815-a2ed-0452b4a255b5"). InnerVolumeSpecName "kube-api-access-rtrmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.470049 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtrmj\" (UniqueName: \"kubernetes.io/projected/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-kube-api-access-rtrmj\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.470098 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.470126 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-logs\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.604426 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-654898f896-cnwpl"] Feb 26 08:41:59 crc kubenswrapper[4741]: E0226 08:41:59.605276 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6f5c31-f24d-4815-a2ed-0452b4a255b5" containerName="placement-api" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.605297 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6f5c31-f24d-4815-a2ed-0452b4a255b5" containerName="placement-api" Feb 26 08:41:59 crc kubenswrapper[4741]: E0226 08:41:59.605318 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6f5c31-f24d-4815-a2ed-0452b4a255b5" containerName="placement-log" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.605325 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6f5c31-f24d-4815-a2ed-0452b4a255b5" containerName="placement-log" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.605616 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf6f5c31-f24d-4815-a2ed-0452b4a255b5" containerName="placement-api" Feb 26 
08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.605657 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf6f5c31-f24d-4815-a2ed-0452b4a255b5" containerName="placement-log" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.606751 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.618358 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf6f5c31-f24d-4815-a2ed-0452b4a255b5" (UID: "bf6f5c31-f24d-4815-a2ed-0452b4a255b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.655258 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6b8d584bd4-g6p2l"] Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.657016 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.685310 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-654898f896-cnwpl"] Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.692539 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-config-data-custom\") pod \"heat-engine-654898f896-cnwpl\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.692729 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-combined-ca-bundle\") pod \"heat-engine-654898f896-cnwpl\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.692769 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-config-data\") pod \"heat-engine-654898f896-cnwpl\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.693000 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxpfs\" (UniqueName: \"kubernetes.io/projected/fdf44a23-6035-426e-b4ab-dc1bccedd505-kube-api-access-kxpfs\") pod \"heat-engine-654898f896-cnwpl\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.693314 4741 reconciler_common.go:293] "Volume detached for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.693899 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-config-data" (OuterVolumeSpecName: "config-data") pod "bf6f5c31-f24d-4815-a2ed-0452b4a255b5" (UID: "bf6f5c31-f24d-4815-a2ed-0452b4a255b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.707195 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b8d584bd4-g6p2l"] Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.722675 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5df9b866b6-8f6jq"] Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.733716 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.736822 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5df9b866b6-8f6jq"] Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.781966 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bf6f5c31-f24d-4815-a2ed-0452b4a255b5" (UID: "bf6f5c31-f24d-4815-a2ed-0452b4a255b5"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.798689 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxpfs\" (UniqueName: \"kubernetes.io/projected/fdf44a23-6035-426e-b4ab-dc1bccedd505-kube-api-access-kxpfs\") pod \"heat-engine-654898f896-cnwpl\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.798760 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-combined-ca-bundle\") pod \"heat-cfnapi-6b8d584bd4-g6p2l\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") " pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.798799 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-config-data-custom\") pod \"heat-api-5df9b866b6-8f6jq\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") " pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.798831 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9n6g\" (UniqueName: \"kubernetes.io/projected/38f236a3-4736-4131-8e96-130f6aede3f2-kube-api-access-j9n6g\") pod \"heat-cfnapi-6b8d584bd4-g6p2l\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") " pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.798900 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-config-data\") pod \"heat-api-5df9b866b6-8f6jq\" 
(UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") " pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.798932 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-combined-ca-bundle\") pod \"heat-api-5df9b866b6-8f6jq\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") " pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.798957 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-config-data-custom\") pod \"heat-engine-654898f896-cnwpl\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.798989 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-config-data-custom\") pod \"heat-cfnapi-6b8d584bd4-g6p2l\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") " pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.799038 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd2sw\" (UniqueName: \"kubernetes.io/projected/54aa908b-305e-4137-a861-2fac8d3c46aa-kube-api-access-bd2sw\") pod \"heat-api-5df9b866b6-8f6jq\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") " pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.799058 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-combined-ca-bundle\") pod 
\"heat-engine-654898f896-cnwpl\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.799081 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-config-data\") pod \"heat-engine-654898f896-cnwpl\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.799126 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-config-data\") pod \"heat-cfnapi-6b8d584bd4-g6p2l\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") " pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.799191 4741 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.799203 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.806719 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-config-data\") pod \"heat-engine-654898f896-cnwpl\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.810594 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d788959bb-k7x27" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.812407 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-combined-ca-bundle\") pod \"heat-engine-654898f896-cnwpl\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.824903 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxpfs\" (UniqueName: \"kubernetes.io/projected/fdf44a23-6035-426e-b4ab-dc1bccedd505-kube-api-access-kxpfs\") pod \"heat-engine-654898f896-cnwpl\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.829148 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-config-data-custom\") pod \"heat-engine-654898f896-cnwpl\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.843790 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bf6f5c31-f24d-4815-a2ed-0452b4a255b5" (UID: "bf6f5c31-f24d-4815-a2ed-0452b4a255b5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.862266 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d788959bb-k7x27" event={"ID":"bf6f5c31-f24d-4815-a2ed-0452b4a255b5","Type":"ContainerDied","Data":"ab8348fb2ebf9fbcdbcb4415d35d39ee548939bc58ff89a9365211abb8071258"} Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.862549 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87f31dc5-9567-43f5-a2a8-34115d352a77","Type":"ContainerStarted","Data":"6cde56754bc3729ade3971a555170a3c1556cad73f23a985e9b6b902313b3484"} Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.862632 4741 scope.go:117] "RemoveContainer" containerID="ff35a78726fff2360bc9fbdccade7edc73ebd043bc0c75b111328b4095a4f92c" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.896644 4741 scope.go:117] "RemoveContainer" containerID="361299b8f26873b7e17b5f0dbc5e4e44625d859b02e1a5c0f0835a98a84e4c4f" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.901991 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-combined-ca-bundle\") pod \"heat-cfnapi-6b8d584bd4-g6p2l\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") " pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.902121 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-config-data-custom\") pod \"heat-api-5df9b866b6-8f6jq\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") " pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.902174 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9n6g\" (UniqueName: 
\"kubernetes.io/projected/38f236a3-4736-4131-8e96-130f6aede3f2-kube-api-access-j9n6g\") pod \"heat-cfnapi-6b8d584bd4-g6p2l\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") " pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.902321 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-config-data\") pod \"heat-api-5df9b866b6-8f6jq\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") " pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.902371 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-combined-ca-bundle\") pod \"heat-api-5df9b866b6-8f6jq\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") " pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.902421 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-config-data-custom\") pod \"heat-cfnapi-6b8d584bd4-g6p2l\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") " pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.902526 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd2sw\" (UniqueName: \"kubernetes.io/projected/54aa908b-305e-4137-a861-2fac8d3c46aa-kube-api-access-bd2sw\") pod \"heat-api-5df9b866b6-8f6jq\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") " pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.902581 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-config-data\") pod \"heat-cfnapi-6b8d584bd4-g6p2l\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") " pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.902692 4741 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6f5c31-f24d-4815-a2ed-0452b4a255b5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.906047 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-combined-ca-bundle\") pod \"heat-cfnapi-6b8d584bd4-g6p2l\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") " pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.907439 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-config-data-custom\") pod \"heat-api-5df9b866b6-8f6jq\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") " pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.909046 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-combined-ca-bundle\") pod \"heat-api-5df9b866b6-8f6jq\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") " pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.910674 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-config-data-custom\") pod \"heat-cfnapi-6b8d584bd4-g6p2l\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") " 
pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.913927 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-config-data\") pod \"heat-cfnapi-6b8d584bd4-g6p2l\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") " pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.917817 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-config-data\") pod \"heat-api-5df9b866b6-8f6jq\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") " pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.926272 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd2sw\" (UniqueName: \"kubernetes.io/projected/54aa908b-305e-4137-a861-2fac8d3c46aa-kube-api-access-bd2sw\") pod \"heat-api-5df9b866b6-8f6jq\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") " pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:41:59 crc kubenswrapper[4741]: I0226 08:41:59.926604 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9n6g\" (UniqueName: \"kubernetes.io/projected/38f236a3-4736-4131-8e96-130f6aede3f2-kube-api-access-j9n6g\") pod \"heat-cfnapi-6b8d584bd4-g6p2l\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") " pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.011552 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.012014 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.115649 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.200350 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534922-4g5sp"] Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.203254 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534922-4g5sp" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.207899 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.208174 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.208318 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.223695 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534922-4g5sp"] Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.239157 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85l9\" (UniqueName: \"kubernetes.io/projected/0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75-kube-api-access-b85l9\") pod \"auto-csr-approver-29534922-4g5sp\" (UID: \"0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75\") " pod="openshift-infra/auto-csr-approver-29534922-4g5sp" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.342408 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b85l9\" (UniqueName: 
\"kubernetes.io/projected/0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75-kube-api-access-b85l9\") pod \"auto-csr-approver-29534922-4g5sp\" (UID: \"0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75\") " pod="openshift-infra/auto-csr-approver-29534922-4g5sp" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.371219 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b85l9\" (UniqueName: \"kubernetes.io/projected/0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75-kube-api-access-b85l9\") pod \"auto-csr-approver-29534922-4g5sp\" (UID: \"0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75\") " pod="openshift-infra/auto-csr-approver-29534922-4g5sp" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.502314 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534922-4g5sp" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.588303 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d788959bb-k7x27"] Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.646889 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d788959bb-k7x27"] Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.847051 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.880217 4741 generic.go:334] "Generic (PLEG): container finished" podID="09de7a93-fe79-457d-8cdf-e710ca54e91a" containerID="655ea2dd07b9e6afe9f707b9e0e738e0786edbe7b6a3921a1ad80dcc06a4f853" exitCode=0 Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.880524 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09de7a93-fe79-457d-8cdf-e710ca54e91a","Type":"ContainerDied","Data":"655ea2dd07b9e6afe9f707b9e0e738e0786edbe7b6a3921a1ad80dcc06a4f853"} Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.880562 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09de7a93-fe79-457d-8cdf-e710ca54e91a","Type":"ContainerDied","Data":"14e13d3a04d78dc5855a333c9dba693ba435435dc8b8e2b7700d6d62889e4eff"} Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.880583 4741 scope.go:117] "RemoveContainer" containerID="655ea2dd07b9e6afe9f707b9e0e738e0786edbe7b6a3921a1ad80dcc06a4f853" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.880766 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.904830 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87f31dc5-9567-43f5-a2a8-34115d352a77","Type":"ContainerStarted","Data":"860a4e1390092d0be3c5e1e9988138441cff65dbe008d5ef765d8c9d68948d5e"} Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.911239 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-scripts\") pod \"09de7a93-fe79-457d-8cdf-e710ca54e91a\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.911319 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09de7a93-fe79-457d-8cdf-e710ca54e91a-logs\") pod \"09de7a93-fe79-457d-8cdf-e710ca54e91a\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.911358 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09de7a93-fe79-457d-8cdf-e710ca54e91a-httpd-run\") pod \"09de7a93-fe79-457d-8cdf-e710ca54e91a\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.911462 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-public-tls-certs\") pod \"09de7a93-fe79-457d-8cdf-e710ca54e91a\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.912972 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09de7a93-fe79-457d-8cdf-e710ca54e91a-logs" (OuterVolumeSpecName: "logs") pod 
"09de7a93-fe79-457d-8cdf-e710ca54e91a" (UID: "09de7a93-fe79-457d-8cdf-e710ca54e91a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.915537 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod \"09de7a93-fe79-457d-8cdf-e710ca54e91a\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.915650 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-combined-ca-bundle\") pod \"09de7a93-fe79-457d-8cdf-e710ca54e91a\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.915697 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-config-data\") pod \"09de7a93-fe79-457d-8cdf-e710ca54e91a\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.915848 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfbnl\" (UniqueName: \"kubernetes.io/projected/09de7a93-fe79-457d-8cdf-e710ca54e91a-kube-api-access-lfbnl\") pod \"09de7a93-fe79-457d-8cdf-e710ca54e91a\" (UID: \"09de7a93-fe79-457d-8cdf-e710ca54e91a\") " Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.929054 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09de7a93-fe79-457d-8cdf-e710ca54e91a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "09de7a93-fe79-457d-8cdf-e710ca54e91a" (UID: "09de7a93-fe79-457d-8cdf-e710ca54e91a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.955817 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-scripts" (OuterVolumeSpecName: "scripts") pod "09de7a93-fe79-457d-8cdf-e710ca54e91a" (UID: "09de7a93-fe79-457d-8cdf-e710ca54e91a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.961753 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09de7a93-fe79-457d-8cdf-e710ca54e91a-kube-api-access-lfbnl" (OuterVolumeSpecName: "kube-api-access-lfbnl") pod "09de7a93-fe79-457d-8cdf-e710ca54e91a" (UID: "09de7a93-fe79-457d-8cdf-e710ca54e91a"). InnerVolumeSpecName "kube-api-access-lfbnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.987625 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3" (OuterVolumeSpecName: "glance") pod "09de7a93-fe79-457d-8cdf-e710ca54e91a" (UID: "09de7a93-fe79-457d-8cdf-e710ca54e91a"). InnerVolumeSpecName "pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 08:42:00 crc kubenswrapper[4741]: I0226 08:42:00.997276 4741 scope.go:117] "RemoveContainer" containerID="a39cd06c7c4d4de510a2ea0012e028f74d282f530f9c1405f4ac3c898cd66b25" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.024927 4741 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") on node \"crc\" " Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.024963 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfbnl\" (UniqueName: \"kubernetes.io/projected/09de7a93-fe79-457d-8cdf-e710ca54e91a-kube-api-access-lfbnl\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.024977 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.024985 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09de7a93-fe79-457d-8cdf-e710ca54e91a-logs\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.024994 4741 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09de7a93-fe79-457d-8cdf-e710ca54e91a-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.029356 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09de7a93-fe79-457d-8cdf-e710ca54e91a" (UID: "09de7a93-fe79-457d-8cdf-e710ca54e91a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.066011 4741 scope.go:117] "RemoveContainer" containerID="655ea2dd07b9e6afe9f707b9e0e738e0786edbe7b6a3921a1ad80dcc06a4f853" Feb 26 08:42:01 crc kubenswrapper[4741]: E0226 08:42:01.067652 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"655ea2dd07b9e6afe9f707b9e0e738e0786edbe7b6a3921a1ad80dcc06a4f853\": container with ID starting with 655ea2dd07b9e6afe9f707b9e0e738e0786edbe7b6a3921a1ad80dcc06a4f853 not found: ID does not exist" containerID="655ea2dd07b9e6afe9f707b9e0e738e0786edbe7b6a3921a1ad80dcc06a4f853" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.067716 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655ea2dd07b9e6afe9f707b9e0e738e0786edbe7b6a3921a1ad80dcc06a4f853"} err="failed to get container status \"655ea2dd07b9e6afe9f707b9e0e738e0786edbe7b6a3921a1ad80dcc06a4f853\": rpc error: code = NotFound desc = could not find container \"655ea2dd07b9e6afe9f707b9e0e738e0786edbe7b6a3921a1ad80dcc06a4f853\": container with ID starting with 655ea2dd07b9e6afe9f707b9e0e738e0786edbe7b6a3921a1ad80dcc06a4f853 not found: ID does not exist" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.067753 4741 scope.go:117] "RemoveContainer" containerID="a39cd06c7c4d4de510a2ea0012e028f74d282f530f9c1405f4ac3c898cd66b25" Feb 26 08:42:01 crc kubenswrapper[4741]: E0226 08:42:01.071953 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a39cd06c7c4d4de510a2ea0012e028f74d282f530f9c1405f4ac3c898cd66b25\": container with ID starting with a39cd06c7c4d4de510a2ea0012e028f74d282f530f9c1405f4ac3c898cd66b25 not found: ID does not exist" containerID="a39cd06c7c4d4de510a2ea0012e028f74d282f530f9c1405f4ac3c898cd66b25" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.072013 
4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39cd06c7c4d4de510a2ea0012e028f74d282f530f9c1405f4ac3c898cd66b25"} err="failed to get container status \"a39cd06c7c4d4de510a2ea0012e028f74d282f530f9c1405f4ac3c898cd66b25\": rpc error: code = NotFound desc = could not find container \"a39cd06c7c4d4de510a2ea0012e028f74d282f530f9c1405f4ac3c898cd66b25\": container with ID starting with a39cd06c7c4d4de510a2ea0012e028f74d282f530f9c1405f4ac3c898cd66b25 not found: ID does not exist" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.095252 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.134936 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.145581 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "09de7a93-fe79-457d-8cdf-e710ca54e91a" (UID: "09de7a93-fe79-457d-8cdf-e710ca54e91a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.147385 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-config-data" (OuterVolumeSpecName: "config-data") pod "09de7a93-fe79-457d-8cdf-e710ca54e91a" (UID: "09de7a93-fe79-457d-8cdf-e710ca54e91a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.197314 4741 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.197537 4741 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3") on node "crc" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.240425 4741 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.240470 4741 reconciler_common.go:293] "Volume detached for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.240491 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09de7a93-fe79-457d-8cdf-e710ca54e91a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.265319 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-n2qqk"] Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.265716 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" podUID="a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" containerName="dnsmasq-dns" containerID="cri-o://0efab2efc81d97c047ef4217bb3aa5bcdc0dd56b46133d9909f14eb8b5e911a5" gracePeriod=10 Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.340079 4741 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5df9b866b6-8f6jq"] Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.380375 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-654898f896-cnwpl"] Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.415582 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b8d584bd4-g6p2l"] Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.582250 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.617771 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.651573 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 08:42:01 crc kubenswrapper[4741]: E0226 08:42:01.656328 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09de7a93-fe79-457d-8cdf-e710ca54e91a" containerName="glance-log" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.656366 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="09de7a93-fe79-457d-8cdf-e710ca54e91a" containerName="glance-log" Feb 26 08:42:01 crc kubenswrapper[4741]: E0226 08:42:01.656406 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09de7a93-fe79-457d-8cdf-e710ca54e91a" containerName="glance-httpd" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.656413 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="09de7a93-fe79-457d-8cdf-e710ca54e91a" containerName="glance-httpd" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.656785 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="09de7a93-fe79-457d-8cdf-e710ca54e91a" containerName="glance-httpd" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.656811 4741 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="09de7a93-fe79-457d-8cdf-e710ca54e91a" containerName="glance-log" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.658462 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.664026 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.664684 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.702788 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.766173 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1cfca6c-9dec-48b7-a390-17450189e9bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.767517 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.767755 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1cfca6c-9dec-48b7-a390-17450189e9bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " 
pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.767848 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1cfca6c-9dec-48b7-a390-17450189e9bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.767979 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1cfca6c-9dec-48b7-a390-17450189e9bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.768054 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1cfca6c-9dec-48b7-a390-17450189e9bb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.768213 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9878s\" (UniqueName: \"kubernetes.io/projected/e1cfca6c-9dec-48b7-a390-17450189e9bb-kube-api-access-9878s\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.768353 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1cfca6c-9dec-48b7-a390-17450189e9bb-logs\") pod \"glance-default-external-api-0\" (UID: 
\"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.835987 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09de7a93-fe79-457d-8cdf-e710ca54e91a" path="/var/lib/kubelet/pods/09de7a93-fe79-457d-8cdf-e710ca54e91a/volumes" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.844710 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf6f5c31-f24d-4815-a2ed-0452b4a255b5" path="/var/lib/kubelet/pods/bf6f5c31-f24d-4815-a2ed-0452b4a255b5/volumes" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.849681 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534922-4g5sp"] Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.873419 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1cfca6c-9dec-48b7-a390-17450189e9bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.873482 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1cfca6c-9dec-48b7-a390-17450189e9bb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.873568 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9878s\" (UniqueName: \"kubernetes.io/projected/e1cfca6c-9dec-48b7-a390-17450189e9bb-kube-api-access-9878s\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 
08:42:01.873635 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1cfca6c-9dec-48b7-a390-17450189e9bb-logs\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.873787 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1cfca6c-9dec-48b7-a390-17450189e9bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.873844 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.873903 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1cfca6c-9dec-48b7-a390-17450189e9bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.873935 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1cfca6c-9dec-48b7-a390-17450189e9bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.875668 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1cfca6c-9dec-48b7-a390-17450189e9bb-logs\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.878005 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1cfca6c-9dec-48b7-a390-17450189e9bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.884824 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1cfca6c-9dec-48b7-a390-17450189e9bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.888329 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1cfca6c-9dec-48b7-a390-17450189e9bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.889033 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1cfca6c-9dec-48b7-a390-17450189e9bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.893520 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.893599 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/66385550d4571b43986ab0832fdd5f11a5f2b8cdc4d8f3b6edc74982f484140d/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.906633 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1cfca6c-9dec-48b7-a390-17450189e9bb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.906880 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9878s\" (UniqueName: \"kubernetes.io/projected/e1cfca6c-9dec-48b7-a390-17450189e9bb-kube-api-access-9878s\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.953683 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-654898f896-cnwpl" event={"ID":"fdf44a23-6035-426e-b4ab-dc1bccedd505","Type":"ContainerStarted","Data":"c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648"} Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.953743 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-654898f896-cnwpl" 
event={"ID":"fdf44a23-6035-426e-b4ab-dc1bccedd505","Type":"ContainerStarted","Data":"375b941d713851effd24a32ade82beeff2ee6d1779a2c537337c3e447bcffc68"} Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.955379 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:42:01 crc kubenswrapper[4741]: I0226 08:42:01.973041 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" event={"ID":"38f236a3-4736-4131-8e96-130f6aede3f2","Type":"ContainerStarted","Data":"6c06e7e0c37fa05394d0234156e8fc3dfb50eb277fa7ddae1b6febc8a6f00540"} Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.006740 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87f31dc5-9567-43f5-a2a8-34115d352a77","Type":"ContainerStarted","Data":"4d8a085b66a5e51fce4ece5736e031d65acf2d8fd82ff40afbb093e0006df4f7"} Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.013706 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534922-4g5sp" event={"ID":"0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75","Type":"ContainerStarted","Data":"a06a72b3ba646b3024632b0fea886bc887e76ed9ab02dd007e8cd41220dcfcc9"} Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.030096 4741 generic.go:334] "Generic (PLEG): container finished" podID="a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" containerID="0efab2efc81d97c047ef4217bb3aa5bcdc0dd56b46133d9909f14eb8b5e911a5" exitCode=0 Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.044968 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" event={"ID":"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2","Type":"ContainerDied","Data":"0efab2efc81d97c047ef4217bb3aa5bcdc0dd56b46133d9909f14eb8b5e911a5"} Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.047355 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5df9b866b6-8f6jq" 
event={"ID":"54aa908b-305e-4137-a861-2fac8d3c46aa","Type":"ContainerStarted","Data":"25bd3a27bb4b8057e4efcc182cef0ef8a5409d3d7a11c82f12a1e33e2235d2a5"} Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.047419 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5df9b866b6-8f6jq" event={"ID":"54aa908b-305e-4137-a861-2fac8d3c46aa","Type":"ContainerStarted","Data":"b76837132731e4d4910563f9428422758c897045e710924ec0ac618e7149b036"} Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.051225 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.111844 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b28031a7-b9d0-4a54-9834-203ba9a791a3\") pod \"glance-default-external-api-0\" (UID: \"e1cfca6c-9dec-48b7-a390-17450189e9bb\") " pod="openstack/glance-default-external-api-0" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.122761 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-654898f896-cnwpl" podStartSLOduration=3.122724725 podStartE2EDuration="3.122724725s" podCreationTimestamp="2026-02-26 08:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:42:01.980562433 +0000 UTC m=+1756.976499820" watchObservedRunningTime="2026-02-26 08:42:02.122724725 +0000 UTC m=+1757.118662112" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.165749 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5df9b866b6-8f6jq" podStartSLOduration=3.165716137 podStartE2EDuration="3.165716137s" podCreationTimestamp="2026-02-26 08:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:42:02.08598007 +0000 UTC m=+1757.081917467" watchObservedRunningTime="2026-02-26 08:42:02.165716137 +0000 UTC m=+1757.161653524" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.297289 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.307360 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.411828 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-ovsdbserver-sb\") pod \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.411925 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5vgl\" (UniqueName: \"kubernetes.io/projected/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-kube-api-access-b5vgl\") pod \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.412071 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-dns-svc\") pod \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.412096 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-dns-swift-storage-0\") pod \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\" (UID: 
\"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.412175 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-ovsdbserver-nb\") pod \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.412318 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-config\") pod \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\" (UID: \"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2\") " Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.422820 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-kube-api-access-b5vgl" (OuterVolumeSpecName: "kube-api-access-b5vgl") pod "a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" (UID: "a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2"). InnerVolumeSpecName "kube-api-access-b5vgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.493957 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" (UID: "a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.523266 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5vgl\" (UniqueName: \"kubernetes.io/projected/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-kube-api-access-b5vgl\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.523477 4741 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.536742 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" (UID: "a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.536791 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-config" (OuterVolumeSpecName: "config") pod "a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" (UID: "a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.552500 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" (UID: "a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.592789 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" (UID: "a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.627727 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.627781 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.627899 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.627917 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:02 crc kubenswrapper[4741]: I0226 08:42:02.787966 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:42:02 crc kubenswrapper[4741]: E0226 08:42:02.788305 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.140008 4741 generic.go:334] "Generic (PLEG): container finished" podID="38f236a3-4736-4131-8e96-130f6aede3f2" containerID="b2e85d697f3a99eef78a8b5d66dc1f55e27c535d2cd1b9dc4419df371a4d6547" exitCode=1 Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.145935 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" event={"ID":"38f236a3-4736-4131-8e96-130f6aede3f2","Type":"ContainerDied","Data":"b2e85d697f3a99eef78a8b5d66dc1f55e27c535d2cd1b9dc4419df371a4d6547"} Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.147675 4741 scope.go:117] "RemoveContainer" containerID="b2e85d697f3a99eef78a8b5d66dc1f55e27c535d2cd1b9dc4419df371a4d6547" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.186435 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" event={"ID":"a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2","Type":"ContainerDied","Data":"78cb00dd105fe3696106b8dbdd5cbf3017016f1389ead9fd56b27ed9112d129a"} Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.186547 4741 scope.go:117] "RemoveContainer" containerID="0efab2efc81d97c047ef4217bb3aa5bcdc0dd56b46133d9909f14eb8b5e911a5" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.186788 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-n2qqk" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.212424 4741 generic.go:334] "Generic (PLEG): container finished" podID="54aa908b-305e-4137-a861-2fac8d3c46aa" containerID="25bd3a27bb4b8057e4efcc182cef0ef8a5409d3d7a11c82f12a1e33e2235d2a5" exitCode=1 Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.213216 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5df9b866b6-8f6jq" event={"ID":"54aa908b-305e-4137-a861-2fac8d3c46aa","Type":"ContainerDied","Data":"25bd3a27bb4b8057e4efcc182cef0ef8a5409d3d7a11c82f12a1e33e2235d2a5"} Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.214827 4741 scope.go:117] "RemoveContainer" containerID="25bd3a27bb4b8057e4efcc182cef0ef8a5409d3d7a11c82f12a1e33e2235d2a5" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.233187 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.367682 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5cb7574d9b-7lwbl"] Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.368341 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5cb7574d9b-7lwbl" podUID="f3252251-2856-49ae-954d-ad40716b99e8" containerName="heat-api" containerID="cri-o://78181bc545e58de3c5e121c4b65279bf6cafd4e8b27f0c1a6c91d63b3a7c161c" gracePeriod=60 Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.410377 4741 scope.go:117] "RemoveContainer" containerID="6609cb7cf633f5daf733df4282c1d5f5966a329417f2fc3454374dec130e926a" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.417462 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-n2qqk"] Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.446545 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-64468c668c-bhzvw"] Feb 
26 08:42:03 crc kubenswrapper[4741]: E0226 08:42:03.453454 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" containerName="dnsmasq-dns" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.454138 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" containerName="dnsmasq-dns" Feb 26 08:42:03 crc kubenswrapper[4741]: E0226 08:42:03.454213 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" containerName="init" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.454223 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" containerName="init" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.485313 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" containerName="dnsmasq-dns" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.498484 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.502064 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.503533 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.554189 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-n2qqk"] Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.599563 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-64468c668c-bhzvw"] Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.629472 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-combined-ca-bundle\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.629678 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-config-data\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.629732 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs22z\" (UniqueName: \"kubernetes.io/projected/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-kube-api-access-bs22z\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 
08:42:03.629841 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-internal-tls-certs\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.629875 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-public-tls-certs\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.629910 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-config-data-custom\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.649248 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b8994dd55-drsrx"] Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.649489 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6b8994dd55-drsrx" podUID="8013599d-c0ae-43ba-ae68-bbecc6acfa6b" containerName="heat-cfnapi" containerID="cri-o://4ba1cecc1f47aa6cb49aa2e08ef9178d0be835fc899c87cc09188bcea039bdad" gracePeriod=60 Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.691181 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-875bfc755-9ndh4"] Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.702963 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.707791 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.711058 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.738689 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-internal-tls-certs\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.738772 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-public-tls-certs\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.738840 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-config-data-custom\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.739152 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-combined-ca-bundle\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc 
kubenswrapper[4741]: I0226 08:42:03.739452 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-config-data\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.739502 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs22z\" (UniqueName: \"kubernetes.io/projected/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-kube-api-access-bs22z\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.770651 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-combined-ca-bundle\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.770929 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-public-tls-certs\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.775524 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-internal-tls-certs\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.778171 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bs22z\" (UniqueName: \"kubernetes.io/projected/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-kube-api-access-bs22z\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.778733 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-config-data\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.781849 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-875bfc755-9ndh4"] Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.808774 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-config-data-custom\") pod \"heat-api-64468c668c-bhzvw\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.845490 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2" path="/var/lib/kubelet/pods/a59cd82f-1c22-4f1d-b5f3-0b6ea61dfed2/volumes" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.847133 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-internal-tls-certs\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.847196 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-public-tls-certs\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.847275 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-config-data-custom\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.847337 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-combined-ca-bundle\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.847421 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t92m\" (UniqueName: \"kubernetes.io/projected/47290f7b-69ba-42b3-88c8-cfd13d6009ae-kube-api-access-8t92m\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.847524 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-config-data\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 
08:42:03.958616 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-internal-tls-certs\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.958692 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-public-tls-certs\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.958888 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-config-data-custom\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.959080 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-combined-ca-bundle\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.959456 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t92m\" (UniqueName: \"kubernetes.io/projected/47290f7b-69ba-42b3-88c8-cfd13d6009ae-kube-api-access-8t92m\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.959819 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-config-data\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.967186 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.972307 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-internal-tls-certs\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.980256 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-config-data-custom\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.982539 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-combined-ca-bundle\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.983469 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-public-tls-certs\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " 
pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.987658 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-config-data\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:03 crc kubenswrapper[4741]: I0226 08:42:03.987773 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t92m\" (UniqueName: \"kubernetes.io/projected/47290f7b-69ba-42b3-88c8-cfd13d6009ae-kube-api-access-8t92m\") pod \"heat-cfnapi-875bfc755-9ndh4\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.088051 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.272142 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" event={"ID":"38f236a3-4736-4131-8e96-130f6aede3f2","Type":"ContainerStarted","Data":"284d19433c8a1ba0c8f28564ffd9ba72fc8ce03b9d401d2b5bf1dce7ce3a0344"} Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.275728 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.278423 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.278468 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.285908 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"87f31dc5-9567-43f5-a2a8-34115d352a77","Type":"ContainerStarted","Data":"18b4ee99348dae62b079916adb6704bf2569489306dbfbe47c6b4cc719c79cf3"} Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.287284 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.313355 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534922-4g5sp" event={"ID":"0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75","Type":"ContainerStarted","Data":"ae413794ff8c88e06a2bde687aafdc01070a102ce9c070115daa7ebaedb02330"} Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.373300 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.381703 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" podStartSLOduration=5.381672605 podStartE2EDuration="5.381672605s" podCreationTimestamp="2026-02-26 08:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:42:04.33085873 +0000 UTC m=+1759.326796117" watchObservedRunningTime="2026-02-26 08:42:04.381672605 +0000 UTC m=+1759.377609992" Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.382682 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534922-4g5sp" podStartSLOduration=3.381940799 podStartE2EDuration="4.382676244s" podCreationTimestamp="2026-02-26 08:42:00 +0000 UTC" firstStartedPulling="2026-02-26 08:42:01.81728047 +0000 UTC m=+1756.813217857" lastFinishedPulling="2026-02-26 08:42:02.818015915 +0000 UTC m=+1757.813953302" observedRunningTime="2026-02-26 08:42:04.363295843 +0000 UTC m=+1759.359233230" watchObservedRunningTime="2026-02-26 08:42:04.382676244 +0000 UTC 
m=+1759.378613631" Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.394739 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.435282 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.452348 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1cfca6c-9dec-48b7-a390-17450189e9bb","Type":"ContainerStarted","Data":"80bb1ce580cc2e35b4fe6529ce56b21d15428b589e84bef007cb95b1d430008c"} Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.537548 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.979623505 podStartE2EDuration="9.537517737s" podCreationTimestamp="2026-02-26 08:41:55 +0000 UTC" firstStartedPulling="2026-02-26 08:41:57.852150986 +0000 UTC m=+1752.848088373" lastFinishedPulling="2026-02-26 08:42:03.410045218 +0000 UTC m=+1758.405982605" observedRunningTime="2026-02-26 08:42:04.432550892 +0000 UTC m=+1759.428488279" watchObservedRunningTime="2026-02-26 08:42:04.537517737 +0000 UTC m=+1759.533455124" Feb 26 08:42:04 crc kubenswrapper[4741]: I0226 08:42:04.574084 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.087540 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-64468c668c-bhzvw"] Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.215755 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-875bfc755-9ndh4"] Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.489506 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:42:05 crc kubenswrapper[4741]: 
I0226 08:42:05.557491 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-875bfc755-9ndh4" event={"ID":"47290f7b-69ba-42b3-88c8-cfd13d6009ae","Type":"ContainerStarted","Data":"2695a1e45fa7eec2e29e9ef76cb7b702097909a15d6549ce1fe8878d1185fd64"} Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.560787 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1cfca6c-9dec-48b7-a390-17450189e9bb","Type":"ContainerStarted","Data":"1e4d90471e0877c694b9e84afbc21935ebae337f5b5c5d35053e1363e829c113"} Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.575260 4741 generic.go:334] "Generic (PLEG): container finished" podID="38f236a3-4736-4131-8e96-130f6aede3f2" containerID="284d19433c8a1ba0c8f28564ffd9ba72fc8ce03b9d401d2b5bf1dce7ce3a0344" exitCode=1 Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.575417 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" event={"ID":"38f236a3-4736-4131-8e96-130f6aede3f2","Type":"ContainerDied","Data":"284d19433c8a1ba0c8f28564ffd9ba72fc8ce03b9d401d2b5bf1dce7ce3a0344"} Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.575534 4741 scope.go:117] "RemoveContainer" containerID="b2e85d697f3a99eef78a8b5d66dc1f55e27c535d2cd1b9dc4419df371a4d6547" Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.577002 4741 scope.go:117] "RemoveContainer" containerID="284d19433c8a1ba0c8f28564ffd9ba72fc8ce03b9d401d2b5bf1dce7ce3a0344" Feb 26 08:42:05 crc kubenswrapper[4741]: E0226 08:42:05.577462 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6b8d584bd4-g6p2l_openstack(38f236a3-4736-4131-8e96-130f6aede3f2)\"" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" podUID="38f236a3-4736-4131-8e96-130f6aede3f2" Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 
08:42:05.601934 4741 generic.go:334] "Generic (PLEG): container finished" podID="0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75" containerID="ae413794ff8c88e06a2bde687aafdc01070a102ce9c070115daa7ebaedb02330" exitCode=0 Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.602039 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534922-4g5sp" event={"ID":"0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75","Type":"ContainerDied","Data":"ae413794ff8c88e06a2bde687aafdc01070a102ce9c070115daa7ebaedb02330"} Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.614781 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64468c668c-bhzvw" event={"ID":"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a","Type":"ContainerStarted","Data":"37b9586378ba8c814f40327b0fec98b0e24dfb98155e969ecdbd858a8d752b80"} Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.657465 4741 generic.go:334] "Generic (PLEG): container finished" podID="54aa908b-305e-4137-a861-2fac8d3c46aa" containerID="18cb5cf2f94cdbb34c28e9ceeecd930af1eebd1540e0d9d546e9e2f2e6b194a7" exitCode=1 Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.659024 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5df9b866b6-8f6jq" event={"ID":"54aa908b-305e-4137-a861-2fac8d3c46aa","Type":"ContainerDied","Data":"18cb5cf2f94cdbb34c28e9ceeecd930af1eebd1540e0d9d546e9e2f2e6b194a7"} Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.660296 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.660610 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.666569 4741 scope.go:117] "RemoveContainer" containerID="18cb5cf2f94cdbb34c28e9ceeecd930af1eebd1540e0d9d546e9e2f2e6b194a7" Feb 26 08:42:05 crc kubenswrapper[4741]: E0226 
08:42:05.671734 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5df9b866b6-8f6jq_openstack(54aa908b-305e-4137-a861-2fac8d3c46aa)\"" pod="openstack/heat-api-5df9b866b6-8f6jq" podUID="54aa908b-305e-4137-a861-2fac8d3c46aa" Feb 26 08:42:05 crc kubenswrapper[4741]: I0226 08:42:05.760323 4741 scope.go:117] "RemoveContainer" containerID="25bd3a27bb4b8057e4efcc182cef0ef8a5409d3d7a11c82f12a1e33e2235d2a5" Feb 26 08:42:06 crc kubenswrapper[4741]: I0226 08:42:06.700078 4741 scope.go:117] "RemoveContainer" containerID="18cb5cf2f94cdbb34c28e9ceeecd930af1eebd1540e0d9d546e9e2f2e6b194a7" Feb 26 08:42:06 crc kubenswrapper[4741]: E0226 08:42:06.702147 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5df9b866b6-8f6jq_openstack(54aa908b-305e-4137-a861-2fac8d3c46aa)\"" pod="openstack/heat-api-5df9b866b6-8f6jq" podUID="54aa908b-305e-4137-a861-2fac8d3c46aa" Feb 26 08:42:06 crc kubenswrapper[4741]: I0226 08:42:06.712305 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-875bfc755-9ndh4" event={"ID":"47290f7b-69ba-42b3-88c8-cfd13d6009ae","Type":"ContainerStarted","Data":"16d8083239d78c9048e36a510d693abd7cde5e02f5669c0a3c01d5dbe0d28a06"} Feb 26 08:42:06 crc kubenswrapper[4741]: I0226 08:42:06.714671 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:42:06 crc kubenswrapper[4741]: I0226 08:42:06.722069 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1cfca6c-9dec-48b7-a390-17450189e9bb","Type":"ContainerStarted","Data":"3b69aacc2b77f11b0c3584a57b7dd8a734b2cbe9781448b67dca52a2c307105e"} Feb 26 08:42:06 crc kubenswrapper[4741]: I0226 
08:42:06.749440 4741 scope.go:117] "RemoveContainer" containerID="284d19433c8a1ba0c8f28564ffd9ba72fc8ce03b9d401d2b5bf1dce7ce3a0344" Feb 26 08:42:06 crc kubenswrapper[4741]: E0226 08:42:06.759935 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6b8d584bd4-g6p2l_openstack(38f236a3-4736-4131-8e96-130f6aede3f2)\"" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" podUID="38f236a3-4736-4131-8e96-130f6aede3f2" Feb 26 08:42:06 crc kubenswrapper[4741]: I0226 08:42:06.772303 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.772282059 podStartE2EDuration="5.772282059s" podCreationTimestamp="2026-02-26 08:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:42:06.768982036 +0000 UTC m=+1761.764919433" watchObservedRunningTime="2026-02-26 08:42:06.772282059 +0000 UTC m=+1761.768219446" Feb 26 08:42:06 crc kubenswrapper[4741]: I0226 08:42:06.776577 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64468c668c-bhzvw" event={"ID":"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a","Type":"ContainerStarted","Data":"b9e8eae684854b645369f5bf2287914c87874e87ec849e5b30254b7ae7b0c11d"} Feb 26 08:42:06 crc kubenswrapper[4741]: I0226 08:42:06.776643 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:42:06 crc kubenswrapper[4741]: I0226 08:42:06.781079 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="ceilometer-central-agent" containerID="cri-o://6cde56754bc3729ade3971a555170a3c1556cad73f23a985e9b6b902313b3484" gracePeriod=30 Feb 26 08:42:06 crc 
kubenswrapper[4741]: I0226 08:42:06.781295 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="proxy-httpd" containerID="cri-o://18b4ee99348dae62b079916adb6704bf2569489306dbfbe47c6b4cc719c79cf3" gracePeriod=30 Feb 26 08:42:06 crc kubenswrapper[4741]: I0226 08:42:06.781340 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="sg-core" containerID="cri-o://4d8a085b66a5e51fce4ece5736e031d65acf2d8fd82ff40afbb093e0006df4f7" gracePeriod=30 Feb 26 08:42:06 crc kubenswrapper[4741]: I0226 08:42:06.781378 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="ceilometer-notification-agent" containerID="cri-o://860a4e1390092d0be3c5e1e9988138441cff65dbe008d5ef765d8c9d68948d5e" gracePeriod=30 Feb 26 08:42:06 crc kubenswrapper[4741]: I0226 08:42:06.875882 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="d83c4a22-8843-4882-9c41-0a5c11ba9dff" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.224:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 08:42:06 crc kubenswrapper[4741]: I0226 08:42:06.931615 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-875bfc755-9ndh4" podStartSLOduration=3.931593619 podStartE2EDuration="3.931593619s" podCreationTimestamp="2026-02-26 08:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:42:06.903017807 +0000 UTC m=+1761.898955194" watchObservedRunningTime="2026-02-26 08:42:06.931593619 +0000 UTC m=+1761.927531006" Feb 26 08:42:07 crc 
kubenswrapper[4741]: I0226 08:42:07.013708 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-64468c668c-bhzvw" podStartSLOduration=4.013675673 podStartE2EDuration="4.013675673s" podCreationTimestamp="2026-02-26 08:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:42:07.006394766 +0000 UTC m=+1762.002332153" watchObservedRunningTime="2026-02-26 08:42:07.013675673 +0000 UTC m=+1762.009613060" Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.458258 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.621910 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534922-4g5sp" Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.667472 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.689185 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b85l9\" (UniqueName: \"kubernetes.io/projected/0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75-kube-api-access-b85l9\") pod \"0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75\" (UID: \"0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75\") " Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.714295 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75-kube-api-access-b85l9" (OuterVolumeSpecName: "kube-api-access-b85l9") pod "0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75" (UID: "0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75"). InnerVolumeSpecName "kube-api-access-b85l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.795852 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b85l9\" (UniqueName: \"kubernetes.io/projected/0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75-kube-api-access-b85l9\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.841678 4741 generic.go:334] "Generic (PLEG): container finished" podID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerID="18b4ee99348dae62b079916adb6704bf2569489306dbfbe47c6b4cc719c79cf3" exitCode=0 Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.841715 4741 generic.go:334] "Generic (PLEG): container finished" podID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerID="4d8a085b66a5e51fce4ece5736e031d65acf2d8fd82ff40afbb093e0006df4f7" exitCode=2 Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.841726 4741 generic.go:334] "Generic (PLEG): container finished" podID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerID="860a4e1390092d0be3c5e1e9988138441cff65dbe008d5ef765d8c9d68948d5e" exitCode=0 Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.841769 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87f31dc5-9567-43f5-a2a8-34115d352a77","Type":"ContainerDied","Data":"18b4ee99348dae62b079916adb6704bf2569489306dbfbe47c6b4cc719c79cf3"} Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.841802 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87f31dc5-9567-43f5-a2a8-34115d352a77","Type":"ContainerDied","Data":"4d8a085b66a5e51fce4ece5736e031d65acf2d8fd82ff40afbb093e0006df4f7"} Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.841811 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87f31dc5-9567-43f5-a2a8-34115d352a77","Type":"ContainerDied","Data":"860a4e1390092d0be3c5e1e9988138441cff65dbe008d5ef765d8c9d68948d5e"} Feb 
26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.847514 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534922-4g5sp" event={"ID":"0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75","Type":"ContainerDied","Data":"a06a72b3ba646b3024632b0fea886bc887e76ed9ab02dd007e8cd41220dcfcc9"} Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.847572 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a06a72b3ba646b3024632b0fea886bc887e76ed9ab02dd007e8cd41220dcfcc9" Feb 26 08:42:07 crc kubenswrapper[4741]: I0226 08:42:07.847661 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534922-4g5sp" Feb 26 08:42:08 crc kubenswrapper[4741]: I0226 08:42:08.728538 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534916-hwf5d"] Feb 26 08:42:08 crc kubenswrapper[4741]: I0226 08:42:08.738956 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534916-hwf5d"] Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.122051 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-z9pg7"] Feb 26 08:42:09 crc kubenswrapper[4741]: E0226 08:42:09.122658 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75" containerName="oc" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.122676 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75" containerName="oc" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.122931 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75" containerName="oc" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.123867 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-z9pg7" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.151516 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-z9pg7"] Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.222357 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.222878 4741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.253053 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gsvb\" (UniqueName: \"kubernetes.io/projected/b461e3b4-2bbe-4870-b388-b6235c3c0a22-kube-api-access-5gsvb\") pod \"nova-api-db-create-z9pg7\" (UID: \"b461e3b4-2bbe-4870-b388-b6235c3c0a22\") " pod="openstack/nova-api-db-create-z9pg7" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.253258 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b461e3b4-2bbe-4870-b388-b6235c3c0a22-operator-scripts\") pod \"nova-api-db-create-z9pg7\" (UID: \"b461e3b4-2bbe-4870-b388-b6235c3c0a22\") " pod="openstack/nova-api-db-create-z9pg7" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.275606 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.282053 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2ljhv"] Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.294633 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-2ljhv" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.317431 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-cca3-account-create-update-tklhs"] Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.320159 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-cca3-account-create-update-tklhs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.328977 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.353802 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2ljhv"] Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.363443 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07a81300-f955-4bce-9a32-e60e7a391588-operator-scripts\") pod \"nova-cell0-db-create-2ljhv\" (UID: \"07a81300-f955-4bce-9a32-e60e7a391588\") " pod="openstack/nova-cell0-db-create-2ljhv" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.363659 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b461e3b4-2bbe-4870-b388-b6235c3c0a22-operator-scripts\") pod \"nova-api-db-create-z9pg7\" (UID: \"b461e3b4-2bbe-4870-b388-b6235c3c0a22\") " pod="openstack/nova-api-db-create-z9pg7" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.364213 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jn6n\" (UniqueName: \"kubernetes.io/projected/07a81300-f955-4bce-9a32-e60e7a391588-kube-api-access-5jn6n\") pod \"nova-cell0-db-create-2ljhv\" (UID: \"07a81300-f955-4bce-9a32-e60e7a391588\") " pod="openstack/nova-cell0-db-create-2ljhv" Feb 26 08:42:09 crc 
kubenswrapper[4741]: I0226 08:42:09.364784 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gsvb\" (UniqueName: \"kubernetes.io/projected/b461e3b4-2bbe-4870-b388-b6235c3c0a22-kube-api-access-5gsvb\") pod \"nova-api-db-create-z9pg7\" (UID: \"b461e3b4-2bbe-4870-b388-b6235c3c0a22\") " pod="openstack/nova-api-db-create-z9pg7" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.369926 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b461e3b4-2bbe-4870-b388-b6235c3c0a22-operator-scripts\") pod \"nova-api-db-create-z9pg7\" (UID: \"b461e3b4-2bbe-4870-b388-b6235c3c0a22\") " pod="openstack/nova-api-db-create-z9pg7" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.375907 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-cca3-account-create-update-tklhs"] Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.421352 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gsvb\" (UniqueName: \"kubernetes.io/projected/b461e3b4-2bbe-4870-b388-b6235c3c0a22-kube-api-access-5gsvb\") pod \"nova-api-db-create-z9pg7\" (UID: \"b461e3b4-2bbe-4870-b388-b6235c3c0a22\") " pod="openstack/nova-api-db-create-z9pg7" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.468071 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6-operator-scripts\") pod \"nova-api-cca3-account-create-update-tklhs\" (UID: \"1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6\") " pod="openstack/nova-api-cca3-account-create-update-tklhs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.468545 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng94k\" (UniqueName: 
\"kubernetes.io/projected/1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6-kube-api-access-ng94k\") pod \"nova-api-cca3-account-create-update-tklhs\" (UID: \"1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6\") " pod="openstack/nova-api-cca3-account-create-update-tklhs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.469032 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jn6n\" (UniqueName: \"kubernetes.io/projected/07a81300-f955-4bce-9a32-e60e7a391588-kube-api-access-5jn6n\") pod \"nova-cell0-db-create-2ljhv\" (UID: \"07a81300-f955-4bce-9a32-e60e7a391588\") " pod="openstack/nova-cell0-db-create-2ljhv" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.469415 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07a81300-f955-4bce-9a32-e60e7a391588-operator-scripts\") pod \"nova-cell0-db-create-2ljhv\" (UID: \"07a81300-f955-4bce-9a32-e60e7a391588\") " pod="openstack/nova-cell0-db-create-2ljhv" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.470926 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07a81300-f955-4bce-9a32-e60e7a391588-operator-scripts\") pod \"nova-cell0-db-create-2ljhv\" (UID: \"07a81300-f955-4bce-9a32-e60e7a391588\") " pod="openstack/nova-cell0-db-create-2ljhv" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.483325 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-z9pg7" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.521500 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0bee-account-create-update-d8ghs"] Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.522137 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jn6n\" (UniqueName: \"kubernetes.io/projected/07a81300-f955-4bce-9a32-e60e7a391588-kube-api-access-5jn6n\") pod \"nova-cell0-db-create-2ljhv\" (UID: \"07a81300-f955-4bce-9a32-e60e7a391588\") " pod="openstack/nova-cell0-db-create-2ljhv" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.523945 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0bee-account-create-update-d8ghs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.526833 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.556425 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0bee-account-create-update-d8ghs"] Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.571647 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-2sbbs"] Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.573722 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wzdl\" (UniqueName: \"kubernetes.io/projected/fa90358f-3b47-4d6f-a363-9399e7472b60-kube-api-access-8wzdl\") pod \"nova-cell0-0bee-account-create-update-d8ghs\" (UID: \"fa90358f-3b47-4d6f-a363-9399e7472b60\") " pod="openstack/nova-cell0-0bee-account-create-update-d8ghs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.574080 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/fa90358f-3b47-4d6f-a363-9399e7472b60-operator-scripts\") pod \"nova-cell0-0bee-account-create-update-d8ghs\" (UID: \"fa90358f-3b47-4d6f-a363-9399e7472b60\") " pod="openstack/nova-cell0-0bee-account-create-update-d8ghs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.574411 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6-operator-scripts\") pod \"nova-api-cca3-account-create-update-tklhs\" (UID: \"1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6\") " pod="openstack/nova-api-cca3-account-create-update-tklhs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.574608 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng94k\" (UniqueName: \"kubernetes.io/projected/1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6-kube-api-access-ng94k\") pod \"nova-api-cca3-account-create-update-tklhs\" (UID: \"1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6\") " pod="openstack/nova-api-cca3-account-create-update-tklhs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.576344 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-2sbbs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.577240 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6-operator-scripts\") pod \"nova-api-cca3-account-create-update-tklhs\" (UID: \"1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6\") " pod="openstack/nova-api-cca3-account-create-update-tklhs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.604947 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng94k\" (UniqueName: \"kubernetes.io/projected/1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6-kube-api-access-ng94k\") pod \"nova-api-cca3-account-create-update-tklhs\" (UID: \"1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6\") " pod="openstack/nova-api-cca3-account-create-update-tklhs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.629207 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-2sbbs"] Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.630152 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2ljhv" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.663163 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-cca3-account-create-update-tklhs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.679718 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a73675-9d8d-447a-ad06-b626a8016195-operator-scripts\") pod \"nova-cell1-db-create-2sbbs\" (UID: \"13a73675-9d8d-447a-ad06-b626a8016195\") " pod="openstack/nova-cell1-db-create-2sbbs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.679813 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3fe0-account-create-update-mwptz"] Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.680085 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa90358f-3b47-4d6f-a363-9399e7472b60-operator-scripts\") pod \"nova-cell0-0bee-account-create-update-d8ghs\" (UID: \"fa90358f-3b47-4d6f-a363-9399e7472b60\") " pod="openstack/nova-cell0-0bee-account-create-update-d8ghs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.680489 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn962\" (UniqueName: \"kubernetes.io/projected/13a73675-9d8d-447a-ad06-b626a8016195-kube-api-access-nn962\") pod \"nova-cell1-db-create-2sbbs\" (UID: \"13a73675-9d8d-447a-ad06-b626a8016195\") " pod="openstack/nova-cell1-db-create-2sbbs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.681007 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wzdl\" (UniqueName: \"kubernetes.io/projected/fa90358f-3b47-4d6f-a363-9399e7472b60-kube-api-access-8wzdl\") pod \"nova-cell0-0bee-account-create-update-d8ghs\" (UID: \"fa90358f-3b47-4d6f-a363-9399e7472b60\") " pod="openstack/nova-cell0-0bee-account-create-update-d8ghs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 
08:42:09.681688 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa90358f-3b47-4d6f-a363-9399e7472b60-operator-scripts\") pod \"nova-cell0-0bee-account-create-update-d8ghs\" (UID: \"fa90358f-3b47-4d6f-a363-9399e7472b60\") " pod="openstack/nova-cell0-0bee-account-create-update-d8ghs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.684575 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3fe0-account-create-update-mwptz" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.696936 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3fe0-account-create-update-mwptz"] Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.711037 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.714265 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wzdl\" (UniqueName: \"kubernetes.io/projected/fa90358f-3b47-4d6f-a363-9399e7472b60-kube-api-access-8wzdl\") pod \"nova-cell0-0bee-account-create-update-d8ghs\" (UID: \"fa90358f-3b47-4d6f-a363-9399e7472b60\") " pod="openstack/nova-cell0-0bee-account-create-update-d8ghs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.784244 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a73675-9d8d-447a-ad06-b626a8016195-operator-scripts\") pod \"nova-cell1-db-create-2sbbs\" (UID: \"13a73675-9d8d-447a-ad06-b626a8016195\") " pod="openstack/nova-cell1-db-create-2sbbs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.784661 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf7nf\" (UniqueName: 
\"kubernetes.io/projected/4218990d-060f-4a94-8f4c-980bb124cfc8-kube-api-access-hf7nf\") pod \"nova-cell1-3fe0-account-create-update-mwptz\" (UID: \"4218990d-060f-4a94-8f4c-980bb124cfc8\") " pod="openstack/nova-cell1-3fe0-account-create-update-mwptz" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.784788 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn962\" (UniqueName: \"kubernetes.io/projected/13a73675-9d8d-447a-ad06-b626a8016195-kube-api-access-nn962\") pod \"nova-cell1-db-create-2sbbs\" (UID: \"13a73675-9d8d-447a-ad06-b626a8016195\") " pod="openstack/nova-cell1-db-create-2sbbs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.784917 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4218990d-060f-4a94-8f4c-980bb124cfc8-operator-scripts\") pod \"nova-cell1-3fe0-account-create-update-mwptz\" (UID: \"4218990d-060f-4a94-8f4c-980bb124cfc8\") " pod="openstack/nova-cell1-3fe0-account-create-update-mwptz" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.786905 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a73675-9d8d-447a-ad06-b626a8016195-operator-scripts\") pod \"nova-cell1-db-create-2sbbs\" (UID: \"13a73675-9d8d-447a-ad06-b626a8016195\") " pod="openstack/nova-cell1-db-create-2sbbs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.805527 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0bee-account-create-update-d8ghs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.824053 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn962\" (UniqueName: \"kubernetes.io/projected/13a73675-9d8d-447a-ad06-b626a8016195-kube-api-access-nn962\") pod \"nova-cell1-db-create-2sbbs\" (UID: \"13a73675-9d8d-447a-ad06-b626a8016195\") " pod="openstack/nova-cell1-db-create-2sbbs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.847686 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2sbbs" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.863912 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f5b396-e468-4a73-b1e3-258af5766c4c" path="/var/lib/kubelet/pods/e7f5b396-e468-4a73-b1e3-258af5766c4c/volumes" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.891313 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4218990d-060f-4a94-8f4c-980bb124cfc8-operator-scripts\") pod \"nova-cell1-3fe0-account-create-update-mwptz\" (UID: \"4218990d-060f-4a94-8f4c-980bb124cfc8\") " pod="openstack/nova-cell1-3fe0-account-create-update-mwptz" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.892550 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4218990d-060f-4a94-8f4c-980bb124cfc8-operator-scripts\") pod \"nova-cell1-3fe0-account-create-update-mwptz\" (UID: \"4218990d-060f-4a94-8f4c-980bb124cfc8\") " pod="openstack/nova-cell1-3fe0-account-create-update-mwptz" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.893578 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf7nf\" (UniqueName: 
\"kubernetes.io/projected/4218990d-060f-4a94-8f4c-980bb124cfc8-kube-api-access-hf7nf\") pod \"nova-cell1-3fe0-account-create-update-mwptz\" (UID: \"4218990d-060f-4a94-8f4c-980bb124cfc8\") " pod="openstack/nova-cell1-3fe0-account-create-update-mwptz" Feb 26 08:42:09 crc kubenswrapper[4741]: I0226 08:42:09.958912 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf7nf\" (UniqueName: \"kubernetes.io/projected/4218990d-060f-4a94-8f4c-980bb124cfc8-kube-api-access-hf7nf\") pod \"nova-cell1-3fe0-account-create-update-mwptz\" (UID: \"4218990d-060f-4a94-8f4c-980bb124cfc8\") " pod="openstack/nova-cell1-3fe0-account-create-update-mwptz" Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.012308 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.013367 4741 scope.go:117] "RemoveContainer" containerID="284d19433c8a1ba0c8f28564ffd9ba72fc8ce03b9d401d2b5bf1dce7ce3a0344" Feb 26 08:42:10 crc kubenswrapper[4741]: E0226 08:42:10.013634 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6b8d584bd4-g6p2l_openstack(38f236a3-4736-4131-8e96-130f6aede3f2)\"" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" podUID="38f236a3-4736-4131-8e96-130f6aede3f2" Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.116270 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-5df9b866b6-8f6jq" Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.117974 4741 scope.go:117] "RemoveContainer" containerID="18cb5cf2f94cdbb34c28e9ceeecd930af1eebd1540e0d9d546e9e2f2e6b194a7" Feb 26 08:42:10 crc kubenswrapper[4741]: E0226 08:42:10.118299 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" 
with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5df9b866b6-8f6jq_openstack(54aa908b-305e-4137-a861-2fac8d3c46aa)\"" pod="openstack/heat-api-5df9b866b6-8f6jq" podUID="54aa908b-305e-4137-a861-2fac8d3c46aa" Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.174663 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3fe0-account-create-update-mwptz" Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.272573 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-z9pg7"] Feb 26 08:42:10 crc kubenswrapper[4741]: W0226 08:42:10.280743 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb461e3b4_2bbe_4870_b388_b6235c3c0a22.slice/crio-7e8cd420393a3808e7752353dc1dfcb4bc4b9a3368344cbaeb5370017c764bba WatchSource:0}: Error finding container 7e8cd420393a3808e7752353dc1dfcb4bc4b9a3368344cbaeb5370017c764bba: Status 404 returned error can't find the container with id 7e8cd420393a3808e7752353dc1dfcb4bc4b9a3368344cbaeb5370017c764bba Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.493628 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-cca3-account-create-update-tklhs"] Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.632919 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2ljhv"] Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.762160 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0bee-account-create-update-d8ghs"] Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.888608 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5cb7574d9b-7lwbl" podUID="f3252251-2856-49ae-954d-ad40716b99e8" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.228:8004/healthcheck\": read tcp 
10.217.0.2:57500->10.217.0.228:8004: read: connection reset by peer" Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.913823 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-2sbbs"] Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.954747 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0bee-account-create-update-d8ghs" event={"ID":"fa90358f-3b47-4d6f-a363-9399e7472b60","Type":"ContainerStarted","Data":"a29cce45e712fae374bd2deccafdaad12566f7491b5fadc44c86bb5d5396d680"} Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.969900 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-z9pg7" event={"ID":"b461e3b4-2bbe-4870-b388-b6235c3c0a22","Type":"ContainerStarted","Data":"7e8cd420393a3808e7752353dc1dfcb4bc4b9a3368344cbaeb5370017c764bba"} Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.989852 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-cca3-account-create-update-tklhs" event={"ID":"1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6","Type":"ContainerStarted","Data":"eff60f191df7951328f2dadb92121a0ca0f2baf984b0e8f2fbe8aae552b8e785"} Feb 26 08:42:10 crc kubenswrapper[4741]: I0226 08:42:10.994433 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2ljhv" event={"ID":"07a81300-f955-4bce-9a32-e60e7a391588","Type":"ContainerStarted","Data":"86f702fe3bf60e654b6573ef2a0eb69ccf012b49e28a6697183147644de1312a"} Feb 26 08:42:11 crc kubenswrapper[4741]: I0226 08:42:11.015778 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:42:11 crc kubenswrapper[4741]: I0226 08:42:11.046370 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3fe0-account-create-update-mwptz"] Feb 26 08:42:11 crc kubenswrapper[4741]: I0226 08:42:11.056045 4741 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/heat-cfnapi-6b8994dd55-drsrx" podUID="8013599d-c0ae-43ba-ae68-bbecc6acfa6b" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.227:8000/healthcheck\": read tcp 10.217.0.2:32824->10.217.0.227:8000: read: connection reset by peer" Feb 26 08:42:11 crc kubenswrapper[4741]: W0226 08:42:11.124591 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4218990d_060f_4a94_8f4c_980bb124cfc8.slice/crio-e8f4e8c166b12ef5972992c1feed87ffe66ba146238a026fb86da1e32e72d71e WatchSource:0}: Error finding container e8f4e8c166b12ef5972992c1feed87ffe66ba146238a026fb86da1e32e72d71e: Status 404 returned error can't find the container with id e8f4e8c166b12ef5972992c1feed87ffe66ba146238a026fb86da1e32e72d71e Feb 26 08:42:11 crc kubenswrapper[4741]: I0226 08:42:11.146691 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6b8994dd55-drsrx" podUID="8013599d-c0ae-43ba-ae68-bbecc6acfa6b" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.227:8000/healthcheck\": dial tcp 10.217.0.227:8000: connect: connection refused" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.023061 4741 generic.go:334] "Generic (PLEG): container finished" podID="8013599d-c0ae-43ba-ae68-bbecc6acfa6b" containerID="4ba1cecc1f47aa6cb49aa2e08ef9178d0be835fc899c87cc09188bcea039bdad" exitCode=0 Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.023181 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8994dd55-drsrx" event={"ID":"8013599d-c0ae-43ba-ae68-bbecc6acfa6b","Type":"ContainerDied","Data":"4ba1cecc1f47aa6cb49aa2e08ef9178d0be835fc899c87cc09188bcea039bdad"} Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.023539 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8994dd55-drsrx" 
event={"ID":"8013599d-c0ae-43ba-ae68-bbecc6acfa6b","Type":"ContainerDied","Data":"37f760d96a270ae6ef6babc4b331c4dd539dc54a23d15e8698e015a1b4a74ba8"} Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.023561 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37f760d96a270ae6ef6babc4b331c4dd539dc54a23d15e8698e015a1b4a74ba8" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.026820 4741 generic.go:334] "Generic (PLEG): container finished" podID="1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6" containerID="f6e30e4bca77738a24da45ae16cc07c59725f9e44aad36863fd5211b0bff2b29" exitCode=0 Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.026957 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-cca3-account-create-update-tklhs" event={"ID":"1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6","Type":"ContainerDied","Data":"f6e30e4bca77738a24da45ae16cc07c59725f9e44aad36863fd5211b0bff2b29"} Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.039671 4741 generic.go:334] "Generic (PLEG): container finished" podID="4218990d-060f-4a94-8f4c-980bb124cfc8" containerID="7ceb6bd914989c1e1c6422103aa9d7f4e41b5d1dc3dfc6ecb4abf3a10930a92a" exitCode=0 Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.039689 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3fe0-account-create-update-mwptz" event={"ID":"4218990d-060f-4a94-8f4c-980bb124cfc8","Type":"ContainerDied","Data":"7ceb6bd914989c1e1c6422103aa9d7f4e41b5d1dc3dfc6ecb4abf3a10930a92a"} Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.039886 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3fe0-account-create-update-mwptz" event={"ID":"4218990d-060f-4a94-8f4c-980bb124cfc8","Type":"ContainerStarted","Data":"e8f4e8c166b12ef5972992c1feed87ffe66ba146238a026fb86da1e32e72d71e"} Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.043465 4741 generic.go:334] "Generic (PLEG): container finished" 
podID="07a81300-f955-4bce-9a32-e60e7a391588" containerID="4e71fa392aa862d748993198d27b098a9bd27b7900b08992431c6646099005c9" exitCode=0 Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.043534 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2ljhv" event={"ID":"07a81300-f955-4bce-9a32-e60e7a391588","Type":"ContainerDied","Data":"4e71fa392aa862d748993198d27b098a9bd27b7900b08992431c6646099005c9"} Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.054356 4741 generic.go:334] "Generic (PLEG): container finished" podID="fa90358f-3b47-4d6f-a363-9399e7472b60" containerID="032f8a00d3f9d8215aafc1dc7034d5b33f6cf2b27a388ae3b54ee4216efcda61" exitCode=0 Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.054592 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0bee-account-create-update-d8ghs" event={"ID":"fa90358f-3b47-4d6f-a363-9399e7472b60","Type":"ContainerDied","Data":"032f8a00d3f9d8215aafc1dc7034d5b33f6cf2b27a388ae3b54ee4216efcda61"} Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.061867 4741 generic.go:334] "Generic (PLEG): container finished" podID="f3252251-2856-49ae-954d-ad40716b99e8" containerID="78181bc545e58de3c5e121c4b65279bf6cafd4e8b27f0c1a6c91d63b3a7c161c" exitCode=0 Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.061946 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cb7574d9b-7lwbl" event={"ID":"f3252251-2856-49ae-954d-ad40716b99e8","Type":"ContainerDied","Data":"78181bc545e58de3c5e121c4b65279bf6cafd4e8b27f0c1a6c91d63b3a7c161c"} Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.061983 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cb7574d9b-7lwbl" event={"ID":"f3252251-2856-49ae-954d-ad40716b99e8","Type":"ContainerDied","Data":"fd18f130fa54871c909ca28c93cbcaea60c94a1e22e684131e9a1ea77a4850a1"} Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.061995 4741 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="fd18f130fa54871c909ca28c93cbcaea60c94a1e22e684131e9a1ea77a4850a1" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.063405 4741 generic.go:334] "Generic (PLEG): container finished" podID="b461e3b4-2bbe-4870-b388-b6235c3c0a22" containerID="a78128872fc1697606acff0159f49ac8ac50a0a78b9eac8850456253f9553607" exitCode=0 Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.063454 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-z9pg7" event={"ID":"b461e3b4-2bbe-4870-b388-b6235c3c0a22","Type":"ContainerDied","Data":"a78128872fc1697606acff0159f49ac8ac50a0a78b9eac8850456253f9553607"} Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.071372 4741 generic.go:334] "Generic (PLEG): container finished" podID="13a73675-9d8d-447a-ad06-b626a8016195" containerID="d2508e3043d422c51dad3287a50dcb5f2e3fd5949676282f80de18f77a47bac9" exitCode=0 Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.071445 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2sbbs" event={"ID":"13a73675-9d8d-447a-ad06-b626a8016195","Type":"ContainerDied","Data":"d2508e3043d422c51dad3287a50dcb5f2e3fd5949676282f80de18f77a47bac9"} Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.071482 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2sbbs" event={"ID":"13a73675-9d8d-447a-ad06-b626a8016195","Type":"ContainerStarted","Data":"bc833a5590da28b46dc06aea87cc95fe1ee87bb8e3d44766b40ce49912515bf8"} Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.138364 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.148502 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.240928 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-config-data\") pod \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.243371 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdqtv\" (UniqueName: \"kubernetes.io/projected/f3252251-2856-49ae-954d-ad40716b99e8-kube-api-access-xdqtv\") pod \"f3252251-2856-49ae-954d-ad40716b99e8\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.243498 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-config-data\") pod \"f3252251-2856-49ae-954d-ad40716b99e8\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.243802 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-config-data-custom\") pod \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.243910 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-config-data-custom\") pod \"f3252251-2856-49ae-954d-ad40716b99e8\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.243976 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-dbkr5\" (UniqueName: \"kubernetes.io/projected/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-kube-api-access-dbkr5\") pod \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.244061 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-combined-ca-bundle\") pod \"f3252251-2856-49ae-954d-ad40716b99e8\" (UID: \"f3252251-2856-49ae-954d-ad40716b99e8\") " Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.244122 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-combined-ca-bundle\") pod \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\" (UID: \"8013599d-c0ae-43ba-ae68-bbecc6acfa6b\") " Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.270914 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8013599d-c0ae-43ba-ae68-bbecc6acfa6b" (UID: "8013599d-c0ae-43ba-ae68-bbecc6acfa6b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.271723 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3252251-2856-49ae-954d-ad40716b99e8-kube-api-access-xdqtv" (OuterVolumeSpecName: "kube-api-access-xdqtv") pod "f3252251-2856-49ae-954d-ad40716b99e8" (UID: "f3252251-2856-49ae-954d-ad40716b99e8"). InnerVolumeSpecName "kube-api-access-xdqtv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.271993 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-kube-api-access-dbkr5" (OuterVolumeSpecName: "kube-api-access-dbkr5") pod "8013599d-c0ae-43ba-ae68-bbecc6acfa6b" (UID: "8013599d-c0ae-43ba-ae68-bbecc6acfa6b"). InnerVolumeSpecName "kube-api-access-dbkr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.277442 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f3252251-2856-49ae-954d-ad40716b99e8" (UID: "f3252251-2856-49ae-954d-ad40716b99e8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.301566 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3252251-2856-49ae-954d-ad40716b99e8" (UID: "f3252251-2856-49ae-954d-ad40716b99e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.308258 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.308677 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.358250 4741 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.358283 4741 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.358293 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbkr5\" (UniqueName: \"kubernetes.io/projected/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-kube-api-access-dbkr5\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.358305 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.358315 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdqtv\" (UniqueName: \"kubernetes.io/projected/f3252251-2856-49ae-954d-ad40716b99e8-kube-api-access-xdqtv\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.380666 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 
08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.385638 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.408272 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8013599d-c0ae-43ba-ae68-bbecc6acfa6b" (UID: "8013599d-c0ae-43ba-ae68-bbecc6acfa6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.431315 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-config-data" (OuterVolumeSpecName: "config-data") pod "8013599d-c0ae-43ba-ae68-bbecc6acfa6b" (UID: "8013599d-c0ae-43ba-ae68-bbecc6acfa6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.442239 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-config-data" (OuterVolumeSpecName: "config-data") pod "f3252251-2856-49ae-954d-ad40716b99e8" (UID: "f3252251-2856-49ae-954d-ad40716b99e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.462161 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.462210 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3252251-2856-49ae-954d-ad40716b99e8-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:12 crc kubenswrapper[4741]: I0226 08:42:12.462223 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8013599d-c0ae-43ba-ae68-bbecc6acfa6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.106699 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cb7574d9b-7lwbl" Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.108587 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b8994dd55-drsrx" Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.109547 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.109578 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.255345 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5cb7574d9b-7lwbl"] Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.304251 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5cb7574d9b-7lwbl"] Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.378213 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b8994dd55-drsrx"] Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.409271 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6b8994dd55-drsrx"] Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.708335 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-cca3-account-create-update-tklhs" Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.808247 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng94k\" (UniqueName: \"kubernetes.io/projected/1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6-kube-api-access-ng94k\") pod \"1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6\" (UID: \"1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6\") " Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.808521 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6-operator-scripts\") pod \"1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6\" (UID: \"1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6\") " Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.809725 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8013599d-c0ae-43ba-ae68-bbecc6acfa6b" path="/var/lib/kubelet/pods/8013599d-c0ae-43ba-ae68-bbecc6acfa6b/volumes" Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.810834 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3252251-2856-49ae-954d-ad40716b99e8" path="/var/lib/kubelet/pods/f3252251-2856-49ae-954d-ad40716b99e8/volumes" Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.810829 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6" (UID: "1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.819669 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6-kube-api-access-ng94k" (OuterVolumeSpecName: "kube-api-access-ng94k") pod "1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6" (UID: "1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6"). InnerVolumeSpecName "kube-api-access-ng94k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.912912 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng94k\" (UniqueName: \"kubernetes.io/projected/1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6-kube-api-access-ng94k\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:13 crc kubenswrapper[4741]: I0226 08:42:13.912949 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.098171 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0bee-account-create-update-d8ghs"
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.146901 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-z9pg7"
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.146999 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-cca3-account-create-update-tklhs" event={"ID":"1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6","Type":"ContainerDied","Data":"eff60f191df7951328f2dadb92121a0ca0f2baf984b0e8f2fbe8aae552b8e785"}
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.147063 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eff60f191df7951328f2dadb92121a0ca0f2baf984b0e8f2fbe8aae552b8e785"
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.147003 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-cca3-account-create-update-tklhs"
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.160178 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0bee-account-create-update-d8ghs" event={"ID":"fa90358f-3b47-4d6f-a363-9399e7472b60","Type":"ContainerDied","Data":"a29cce45e712fae374bd2deccafdaad12566f7491b5fadc44c86bb5d5396d680"}
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.160582 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a29cce45e712fae374bd2deccafdaad12566f7491b5fadc44c86bb5d5396d680"
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.160677 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0bee-account-create-update-d8ghs"
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.177429 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-z9pg7"
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.177514 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-z9pg7" event={"ID":"b461e3b4-2bbe-4870-b388-b6235c3c0a22","Type":"ContainerDied","Data":"7e8cd420393a3808e7752353dc1dfcb4bc4b9a3368344cbaeb5370017c764bba"}
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.177570 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e8cd420393a3808e7752353dc1dfcb4bc4b9a3368344cbaeb5370017c764bba"
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.222804 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa90358f-3b47-4d6f-a363-9399e7472b60-operator-scripts\") pod \"fa90358f-3b47-4d6f-a363-9399e7472b60\" (UID: \"fa90358f-3b47-4d6f-a363-9399e7472b60\") "
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.222894 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wzdl\" (UniqueName: \"kubernetes.io/projected/fa90358f-3b47-4d6f-a363-9399e7472b60-kube-api-access-8wzdl\") pod \"fa90358f-3b47-4d6f-a363-9399e7472b60\" (UID: \"fa90358f-3b47-4d6f-a363-9399e7472b60\") "
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.223696 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa90358f-3b47-4d6f-a363-9399e7472b60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa90358f-3b47-4d6f-a363-9399e7472b60" (UID: "fa90358f-3b47-4d6f-a363-9399e7472b60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.224394 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa90358f-3b47-4d6f-a363-9399e7472b60-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.233139 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa90358f-3b47-4d6f-a363-9399e7472b60-kube-api-access-8wzdl" (OuterVolumeSpecName: "kube-api-access-8wzdl") pod "fa90358f-3b47-4d6f-a363-9399e7472b60" (UID: "fa90358f-3b47-4d6f-a363-9399e7472b60"). InnerVolumeSpecName "kube-api-access-8wzdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.327217 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b461e3b4-2bbe-4870-b388-b6235c3c0a22-operator-scripts\") pod \"b461e3b4-2bbe-4870-b388-b6235c3c0a22\" (UID: \"b461e3b4-2bbe-4870-b388-b6235c3c0a22\") "
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.327574 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gsvb\" (UniqueName: \"kubernetes.io/projected/b461e3b4-2bbe-4870-b388-b6235c3c0a22-kube-api-access-5gsvb\") pod \"b461e3b4-2bbe-4870-b388-b6235c3c0a22\" (UID: \"b461e3b4-2bbe-4870-b388-b6235c3c0a22\") "
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.328457 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wzdl\" (UniqueName: \"kubernetes.io/projected/fa90358f-3b47-4d6f-a363-9399e7472b60-kube-api-access-8wzdl\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.336599 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b461e3b4-2bbe-4870-b388-b6235c3c0a22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b461e3b4-2bbe-4870-b388-b6235c3c0a22" (UID: "b461e3b4-2bbe-4870-b388-b6235c3c0a22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.338433 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b461e3b4-2bbe-4870-b388-b6235c3c0a22-kube-api-access-5gsvb" (OuterVolumeSpecName: "kube-api-access-5gsvb") pod "b461e3b4-2bbe-4870-b388-b6235c3c0a22" (UID: "b461e3b4-2bbe-4870-b388-b6235c3c0a22"). InnerVolumeSpecName "kube-api-access-5gsvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.350184 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2sbbs"
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.429459 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3fe0-account-create-update-mwptz"
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.434137 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a73675-9d8d-447a-ad06-b626a8016195-operator-scripts\") pod \"13a73675-9d8d-447a-ad06-b626a8016195\" (UID: \"13a73675-9d8d-447a-ad06-b626a8016195\") "
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.434356 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn962\" (UniqueName: \"kubernetes.io/projected/13a73675-9d8d-447a-ad06-b626a8016195-kube-api-access-nn962\") pod \"13a73675-9d8d-447a-ad06-b626a8016195\" (UID: \"13a73675-9d8d-447a-ad06-b626a8016195\") "
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.440677 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a73675-9d8d-447a-ad06-b626a8016195-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13a73675-9d8d-447a-ad06-b626a8016195" (UID: "13a73675-9d8d-447a-ad06-b626a8016195"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.443412 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2ljhv"
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.443845 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a73675-9d8d-447a-ad06-b626a8016195-kube-api-access-nn962" (OuterVolumeSpecName: "kube-api-access-nn962") pod "13a73675-9d8d-447a-ad06-b626a8016195" (UID: "13a73675-9d8d-447a-ad06-b626a8016195"). InnerVolumeSpecName "kube-api-access-nn962". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.456281 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn962\" (UniqueName: \"kubernetes.io/projected/13a73675-9d8d-447a-ad06-b626a8016195-kube-api-access-nn962\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.456337 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gsvb\" (UniqueName: \"kubernetes.io/projected/b461e3b4-2bbe-4870-b388-b6235c3c0a22-kube-api-access-5gsvb\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.456351 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b461e3b4-2bbe-4870-b388-b6235c3c0a22-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.456371 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a73675-9d8d-447a-ad06-b626a8016195-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.561685 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07a81300-f955-4bce-9a32-e60e7a391588-operator-scripts\") pod \"07a81300-f955-4bce-9a32-e60e7a391588\" (UID: \"07a81300-f955-4bce-9a32-e60e7a391588\") "
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.562160 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf7nf\" (UniqueName: \"kubernetes.io/projected/4218990d-060f-4a94-8f4c-980bb124cfc8-kube-api-access-hf7nf\") pod \"4218990d-060f-4a94-8f4c-980bb124cfc8\" (UID: \"4218990d-060f-4a94-8f4c-980bb124cfc8\") "
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.562295 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07a81300-f955-4bce-9a32-e60e7a391588-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07a81300-f955-4bce-9a32-e60e7a391588" (UID: "07a81300-f955-4bce-9a32-e60e7a391588"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.562423 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jn6n\" (UniqueName: \"kubernetes.io/projected/07a81300-f955-4bce-9a32-e60e7a391588-kube-api-access-5jn6n\") pod \"07a81300-f955-4bce-9a32-e60e7a391588\" (UID: \"07a81300-f955-4bce-9a32-e60e7a391588\") "
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.562519 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4218990d-060f-4a94-8f4c-980bb124cfc8-operator-scripts\") pod \"4218990d-060f-4a94-8f4c-980bb124cfc8\" (UID: \"4218990d-060f-4a94-8f4c-980bb124cfc8\") "
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.563389 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07a81300-f955-4bce-9a32-e60e7a391588-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.571236 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a81300-f955-4bce-9a32-e60e7a391588-kube-api-access-5jn6n" (OuterVolumeSpecName: "kube-api-access-5jn6n") pod "07a81300-f955-4bce-9a32-e60e7a391588" (UID: "07a81300-f955-4bce-9a32-e60e7a391588"). InnerVolumeSpecName "kube-api-access-5jn6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.571577 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4218990d-060f-4a94-8f4c-980bb124cfc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4218990d-060f-4a94-8f4c-980bb124cfc8" (UID: "4218990d-060f-4a94-8f4c-980bb124cfc8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.574365 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4218990d-060f-4a94-8f4c-980bb124cfc8-kube-api-access-hf7nf" (OuterVolumeSpecName: "kube-api-access-hf7nf") pod "4218990d-060f-4a94-8f4c-980bb124cfc8" (UID: "4218990d-060f-4a94-8f4c-980bb124cfc8"). InnerVolumeSpecName "kube-api-access-hf7nf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.666622 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf7nf\" (UniqueName: \"kubernetes.io/projected/4218990d-060f-4a94-8f4c-980bb124cfc8-kube-api-access-hf7nf\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.666673 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jn6n\" (UniqueName: \"kubernetes.io/projected/07a81300-f955-4bce-9a32-e60e7a391588-kube-api-access-5jn6n\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:14 crc kubenswrapper[4741]: I0226 08:42:14.666685 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4218990d-060f-4a94-8f4c-980bb124cfc8-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:15 crc kubenswrapper[4741]: I0226 08:42:15.196904 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2sbbs" event={"ID":"13a73675-9d8d-447a-ad06-b626a8016195","Type":"ContainerDied","Data":"bc833a5590da28b46dc06aea87cc95fe1ee87bb8e3d44766b40ce49912515bf8"}
Feb 26 08:42:15 crc kubenswrapper[4741]: I0226 08:42:15.196958 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc833a5590da28b46dc06aea87cc95fe1ee87bb8e3d44766b40ce49912515bf8"
Feb 26 08:42:15 crc kubenswrapper[4741]: I0226 08:42:15.197077 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2sbbs"
Feb 26 08:42:15 crc kubenswrapper[4741]: I0226 08:42:15.202393 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3fe0-account-create-update-mwptz" event={"ID":"4218990d-060f-4a94-8f4c-980bb124cfc8","Type":"ContainerDied","Data":"e8f4e8c166b12ef5972992c1feed87ffe66ba146238a026fb86da1e32e72d71e"}
Feb 26 08:42:15 crc kubenswrapper[4741]: I0226 08:42:15.202429 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3fe0-account-create-update-mwptz"
Feb 26 08:42:15 crc kubenswrapper[4741]: I0226 08:42:15.202449 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8f4e8c166b12ef5972992c1feed87ffe66ba146238a026fb86da1e32e72d71e"
Feb 26 08:42:15 crc kubenswrapper[4741]: I0226 08:42:15.204938 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2ljhv" event={"ID":"07a81300-f955-4bce-9a32-e60e7a391588","Type":"ContainerDied","Data":"86f702fe3bf60e654b6573ef2a0eb69ccf012b49e28a6697183147644de1312a"}
Feb 26 08:42:15 crc kubenswrapper[4741]: I0226 08:42:15.204993 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86f702fe3bf60e654b6573ef2a0eb69ccf012b49e28a6697183147644de1312a"
Feb 26 08:42:15 crc kubenswrapper[4741]: I0226 08:42:15.205091 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2ljhv"
Feb 26 08:42:15 crc kubenswrapper[4741]: I0226 08:42:15.727083 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-875bfc755-9ndh4"
Feb 26 08:42:15 crc kubenswrapper[4741]: I0226 08:42:15.847097 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b8d584bd4-g6p2l"]
Feb 26 08:42:15 crc kubenswrapper[4741]: I0226 08:42:15.990556 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-64468c668c-bhzvw"
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.084644 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.084773 4741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.113175 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5df9b866b6-8f6jq"]
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.528495 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.549766 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l"
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.643585 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-config-data\") pod \"38f236a3-4736-4131-8e96-130f6aede3f2\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") "
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.643825 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-combined-ca-bundle\") pod \"38f236a3-4736-4131-8e96-130f6aede3f2\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") "
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.643891 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-config-data-custom\") pod \"38f236a3-4736-4131-8e96-130f6aede3f2\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") "
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.643931 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9n6g\" (UniqueName: \"kubernetes.io/projected/38f236a3-4736-4131-8e96-130f6aede3f2-kube-api-access-j9n6g\") pod \"38f236a3-4736-4131-8e96-130f6aede3f2\" (UID: \"38f236a3-4736-4131-8e96-130f6aede3f2\") "
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.668293 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "38f236a3-4736-4131-8e96-130f6aede3f2" (UID: "38f236a3-4736-4131-8e96-130f6aede3f2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.675328 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f236a3-4736-4131-8e96-130f6aede3f2-kube-api-access-j9n6g" (OuterVolumeSpecName: "kube-api-access-j9n6g") pod "38f236a3-4736-4131-8e96-130f6aede3f2" (UID: "38f236a3-4736-4131-8e96-130f6aede3f2"). InnerVolumeSpecName "kube-api-access-j9n6g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.726703 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38f236a3-4736-4131-8e96-130f6aede3f2" (UID: "38f236a3-4736-4131-8e96-130f6aede3f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.747480 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5df9b866b6-8f6jq"
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.748575 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.748617 4741 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.748633 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9n6g\" (UniqueName: \"kubernetes.io/projected/38f236a3-4736-4131-8e96-130f6aede3f2-kube-api-access-j9n6g\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.774914 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-config-data" (OuterVolumeSpecName: "config-data") pod "38f236a3-4736-4131-8e96-130f6aede3f2" (UID: "38f236a3-4736-4131-8e96-130f6aede3f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.788506 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333"
Feb 26 08:42:16 crc kubenswrapper[4741]: E0226 08:42:16.790721 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5"
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.850655 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-config-data-custom\") pod \"54aa908b-305e-4137-a861-2fac8d3c46aa\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") "
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.850745 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-combined-ca-bundle\") pod \"54aa908b-305e-4137-a861-2fac8d3c46aa\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") "
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.850999 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd2sw\" (UniqueName: \"kubernetes.io/projected/54aa908b-305e-4137-a861-2fac8d3c46aa-kube-api-access-bd2sw\") pod \"54aa908b-305e-4137-a861-2fac8d3c46aa\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") "
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.851246 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-config-data\") pod \"54aa908b-305e-4137-a861-2fac8d3c46aa\" (UID: \"54aa908b-305e-4137-a861-2fac8d3c46aa\") "
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.853138 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f236a3-4736-4131-8e96-130f6aede3f2-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.855715 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54aa908b-305e-4137-a861-2fac8d3c46aa-kube-api-access-bd2sw" (OuterVolumeSpecName: "kube-api-access-bd2sw") pod "54aa908b-305e-4137-a861-2fac8d3c46aa" (UID: "54aa908b-305e-4137-a861-2fac8d3c46aa"). InnerVolumeSpecName "kube-api-access-bd2sw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.855901 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "54aa908b-305e-4137-a861-2fac8d3c46aa" (UID: "54aa908b-305e-4137-a861-2fac8d3c46aa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.892503 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54aa908b-305e-4137-a861-2fac8d3c46aa" (UID: "54aa908b-305e-4137-a861-2fac8d3c46aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.923467 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-config-data" (OuterVolumeSpecName: "config-data") pod "54aa908b-305e-4137-a861-2fac8d3c46aa" (UID: "54aa908b-305e-4137-a861-2fac8d3c46aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.956072 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.956125 4741 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.956140 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa908b-305e-4137-a861-2fac8d3c46aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:16 crc kubenswrapper[4741]: I0226 08:42:16.956150 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd2sw\" (UniqueName: \"kubernetes.io/projected/54aa908b-305e-4137-a861-2fac8d3c46aa-kube-api-access-bd2sw\") on node \"crc\" DevicePath \"\""
Feb 26 08:42:17 crc kubenswrapper[4741]: I0226 08:42:17.241496 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l" event={"ID":"38f236a3-4736-4131-8e96-130f6aede3f2","Type":"ContainerDied","Data":"6c06e7e0c37fa05394d0234156e8fc3dfb50eb277fa7ddae1b6febc8a6f00540"}
Feb 26 08:42:17 crc kubenswrapper[4741]: I0226 08:42:17.241563 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b8d584bd4-g6p2l"
Feb 26 08:42:17 crc kubenswrapper[4741]: I0226 08:42:17.241638 4741 scope.go:117] "RemoveContainer" containerID="284d19433c8a1ba0c8f28564ffd9ba72fc8ce03b9d401d2b5bf1dce7ce3a0344"
Feb 26 08:42:17 crc kubenswrapper[4741]: I0226 08:42:17.244533 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5df9b866b6-8f6jq" event={"ID":"54aa908b-305e-4137-a861-2fac8d3c46aa","Type":"ContainerDied","Data":"b76837132731e4d4910563f9428422758c897045e710924ec0ac618e7149b036"}
Feb 26 08:42:17 crc kubenswrapper[4741]: I0226 08:42:17.244576 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5df9b866b6-8f6jq"
Feb 26 08:42:17 crc kubenswrapper[4741]: I0226 08:42:17.284786 4741 scope.go:117] "RemoveContainer" containerID="18cb5cf2f94cdbb34c28e9ceeecd930af1eebd1540e0d9d546e9e2f2e6b194a7"
Feb 26 08:42:17 crc kubenswrapper[4741]: I0226 08:42:17.306195 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5df9b866b6-8f6jq"]
Feb 26 08:42:17 crc kubenswrapper[4741]: I0226 08:42:17.317531 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5df9b866b6-8f6jq"]
Feb 26 08:42:17 crc kubenswrapper[4741]: I0226 08:42:17.329933 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b8d584bd4-g6p2l"]
Feb 26 08:42:17 crc kubenswrapper[4741]: I0226 08:42:17.346553 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6b8d584bd4-g6p2l"]
Feb 26 08:42:17 crc kubenswrapper[4741]: I0226 08:42:17.807946 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38f236a3-4736-4131-8e96-130f6aede3f2" path="/var/lib/kubelet/pods/38f236a3-4736-4131-8e96-130f6aede3f2/volumes"
Feb 26 08:42:17 crc kubenswrapper[4741]: I0226 08:42:17.808619 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54aa908b-305e-4137-a861-2fac8d3c46aa" path="/var/lib/kubelet/pods/54aa908b-305e-4137-a861-2fac8d3c46aa/volumes"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.812588 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ctw5f"]
Feb 26 08:42:19 crc kubenswrapper[4741]: E0226 08:42:19.813746 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aa908b-305e-4137-a861-2fac8d3c46aa" containerName="heat-api"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.813764 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aa908b-305e-4137-a861-2fac8d3c46aa" containerName="heat-api"
Feb 26 08:42:19 crc kubenswrapper[4741]: E0226 08:42:19.813785 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a81300-f955-4bce-9a32-e60e7a391588" containerName="mariadb-database-create"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.813792 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a81300-f955-4bce-9a32-e60e7a391588" containerName="mariadb-database-create"
Feb 26 08:42:19 crc kubenswrapper[4741]: E0226 08:42:19.813824 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4218990d-060f-4a94-8f4c-980bb124cfc8" containerName="mariadb-account-create-update"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.813831 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="4218990d-060f-4a94-8f4c-980bb124cfc8" containerName="mariadb-account-create-update"
Feb 26 08:42:19 crc kubenswrapper[4741]: E0226 08:42:19.813846 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa90358f-3b47-4d6f-a363-9399e7472b60" containerName="mariadb-account-create-update"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.813852 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa90358f-3b47-4d6f-a363-9399e7472b60" containerName="mariadb-account-create-update"
Feb 26 08:42:19 crc kubenswrapper[4741]: E0226 08:42:19.813861 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f236a3-4736-4131-8e96-130f6aede3f2" containerName="heat-cfnapi"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.813867 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f236a3-4736-4131-8e96-130f6aede3f2" containerName="heat-cfnapi"
Feb 26 08:42:19 crc kubenswrapper[4741]: E0226 08:42:19.813910 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3252251-2856-49ae-954d-ad40716b99e8" containerName="heat-api"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.813916 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3252251-2856-49ae-954d-ad40716b99e8" containerName="heat-api"
Feb 26 08:42:19 crc kubenswrapper[4741]: E0226 08:42:19.813926 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8013599d-c0ae-43ba-ae68-bbecc6acfa6b" containerName="heat-cfnapi"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.813933 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8013599d-c0ae-43ba-ae68-bbecc6acfa6b" containerName="heat-cfnapi"
Feb 26 08:42:19 crc kubenswrapper[4741]: E0226 08:42:19.813942 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b461e3b4-2bbe-4870-b388-b6235c3c0a22" containerName="mariadb-database-create"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.813948 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="b461e3b4-2bbe-4870-b388-b6235c3c0a22" containerName="mariadb-database-create"
Feb 26 08:42:19 crc kubenswrapper[4741]: E0226 08:42:19.813987 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6" containerName="mariadb-account-create-update"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.813994 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6" containerName="mariadb-account-create-update"
Feb 26 08:42:19 crc kubenswrapper[4741]: E0226 08:42:19.814009 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a73675-9d8d-447a-ad06-b626a8016195" containerName="mariadb-database-create"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.814015 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a73675-9d8d-447a-ad06-b626a8016195" containerName="mariadb-database-create"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.814325 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aa908b-305e-4137-a861-2fac8d3c46aa" containerName="heat-api"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.814344 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f236a3-4736-4131-8e96-130f6aede3f2" containerName="heat-cfnapi"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.814372 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6" containerName="mariadb-account-create-update"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.814383 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="8013599d-c0ae-43ba-ae68-bbecc6acfa6b" containerName="heat-cfnapi"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.814398 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f236a3-4736-4131-8e96-130f6aede3f2" containerName="heat-cfnapi"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.814410 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="b461e3b4-2bbe-4870-b388-b6235c3c0a22" containerName="mariadb-database-create"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.814418 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a81300-f955-4bce-9a32-e60e7a391588" containerName="mariadb-database-create"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.814441 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3252251-2856-49ae-954d-ad40716b99e8" containerName="heat-api"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.814459 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="4218990d-060f-4a94-8f4c-980bb124cfc8" containerName="mariadb-account-create-update"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.814471 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa90358f-3b47-4d6f-a363-9399e7472b60" containerName="mariadb-account-create-update"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.814479 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a73675-9d8d-447a-ad06-b626a8016195" containerName="mariadb-database-create"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.817995 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ctw5f"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.821728 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.821754 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.821930 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9m8cf"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.889930 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ctw5f"]
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.938864 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-scripts\") pod \"nova-cell0-conductor-db-sync-ctw5f\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " pod="openstack/nova-cell0-conductor-db-sync-ctw5f"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.939039 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-config-data\") pod \"nova-cell0-conductor-db-sync-ctw5f\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " pod="openstack/nova-cell0-conductor-db-sync-ctw5f"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.939600 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpfxk\" (UniqueName: \"kubernetes.io/projected/e11a47b6-7849-4cb0-9b25-c0a26225fba2-kube-api-access-bpfxk\") pod \"nova-cell0-conductor-db-sync-ctw5f\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " pod="openstack/nova-cell0-conductor-db-sync-ctw5f"
Feb 26 08:42:19 crc kubenswrapper[4741]: I0226 08:42:19.939718 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ctw5f\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " pod="openstack/nova-cell0-conductor-db-sync-ctw5f"
Feb 26 08:42:20 crc kubenswrapper[4741]: I0226 08:42:20.042252 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-scripts\") pod \"nova-cell0-conductor-db-sync-ctw5f\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " pod="openstack/nova-cell0-conductor-db-sync-ctw5f"
Feb 26 08:42:20 crc kubenswrapper[4741]: I0226 08:42:20.042750 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-config-data\") pod \"nova-cell0-conductor-db-sync-ctw5f\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " pod="openstack/nova-cell0-conductor-db-sync-ctw5f"
Feb 26 08:42:20 crc kubenswrapper[4741]: I0226
08:42:20.042974 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpfxk\" (UniqueName: \"kubernetes.io/projected/e11a47b6-7849-4cb0-9b25-c0a26225fba2-kube-api-access-bpfxk\") pod \"nova-cell0-conductor-db-sync-ctw5f\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " pod="openstack/nova-cell0-conductor-db-sync-ctw5f" Feb 26 08:42:20 crc kubenswrapper[4741]: I0226 08:42:20.043009 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ctw5f\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " pod="openstack/nova-cell0-conductor-db-sync-ctw5f" Feb 26 08:42:20 crc kubenswrapper[4741]: I0226 08:42:20.053991 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-scripts\") pod \"nova-cell0-conductor-db-sync-ctw5f\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " pod="openstack/nova-cell0-conductor-db-sync-ctw5f" Feb 26 08:42:20 crc kubenswrapper[4741]: I0226 08:42:20.055892 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-config-data\") pod \"nova-cell0-conductor-db-sync-ctw5f\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " pod="openstack/nova-cell0-conductor-db-sync-ctw5f" Feb 26 08:42:20 crc kubenswrapper[4741]: I0226 08:42:20.060041 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:42:20 crc kubenswrapper[4741]: I0226 08:42:20.067996 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpfxk\" (UniqueName: \"kubernetes.io/projected/e11a47b6-7849-4cb0-9b25-c0a26225fba2-kube-api-access-bpfxk\") pod 
\"nova-cell0-conductor-db-sync-ctw5f\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " pod="openstack/nova-cell0-conductor-db-sync-ctw5f" Feb 26 08:42:20 crc kubenswrapper[4741]: I0226 08:42:20.078264 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ctw5f\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " pod="openstack/nova-cell0-conductor-db-sync-ctw5f" Feb 26 08:42:20 crc kubenswrapper[4741]: I0226 08:42:20.147245 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-55d8cd5998-s5j8z"] Feb 26 08:42:20 crc kubenswrapper[4741]: I0226 08:42:20.147513 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-55d8cd5998-s5j8z" podUID="5a70d076-17f1-4f3e-bb9c-0f5740d59c27" containerName="heat-engine" containerID="cri-o://505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3" gracePeriod=60 Feb 26 08:42:20 crc kubenswrapper[4741]: I0226 08:42:20.166546 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ctw5f" Feb 26 08:42:20 crc kubenswrapper[4741]: I0226 08:42:20.806073 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ctw5f"] Feb 26 08:42:20 crc kubenswrapper[4741]: E0226 08:42:20.843368 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 08:42:20 crc kubenswrapper[4741]: E0226 08:42:20.845254 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 08:42:20 crc kubenswrapper[4741]: E0226 08:42:20.847675 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 08:42:20 crc kubenswrapper[4741]: E0226 08:42:20.847752 4741 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-55d8cd5998-s5j8z" podUID="5a70d076-17f1-4f3e-bb9c-0f5740d59c27" containerName="heat-engine" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.313133 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ctw5f" 
event={"ID":"e11a47b6-7849-4cb0-9b25-c0a26225fba2","Type":"ContainerStarted","Data":"55647c89af5f6d49fba79939c8cccb6bdf60526182ef453f6dfa284c0bddeba8"} Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.322475 4741 generic.go:334] "Generic (PLEG): container finished" podID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerID="6cde56754bc3729ade3971a555170a3c1556cad73f23a985e9b6b902313b3484" exitCode=0 Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.322532 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87f31dc5-9567-43f5-a2a8-34115d352a77","Type":"ContainerDied","Data":"6cde56754bc3729ade3971a555170a3c1556cad73f23a985e9b6b902313b3484"} Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.483549 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.508737 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5cb7574d9b-7lwbl" podUID="f3252251-2856-49ae-954d-ad40716b99e8" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.228:8004/healthcheck\": dial tcp 10.217.0.228:8004: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.589664 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-sg-core-conf-yaml\") pod \"87f31dc5-9567-43f5-a2a8-34115d352a77\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.589907 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-scripts\") pod \"87f31dc5-9567-43f5-a2a8-34115d352a77\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " Feb 26 08:42:21 crc 
kubenswrapper[4741]: I0226 08:42:21.590035 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-combined-ca-bundle\") pod \"87f31dc5-9567-43f5-a2a8-34115d352a77\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.590062 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-config-data\") pod \"87f31dc5-9567-43f5-a2a8-34115d352a77\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.590171 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87f31dc5-9567-43f5-a2a8-34115d352a77-run-httpd\") pod \"87f31dc5-9567-43f5-a2a8-34115d352a77\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.590295 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmjzn\" (UniqueName: \"kubernetes.io/projected/87f31dc5-9567-43f5-a2a8-34115d352a77-kube-api-access-mmjzn\") pod \"87f31dc5-9567-43f5-a2a8-34115d352a77\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.591051 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f31dc5-9567-43f5-a2a8-34115d352a77-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "87f31dc5-9567-43f5-a2a8-34115d352a77" (UID: "87f31dc5-9567-43f5-a2a8-34115d352a77"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.591171 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87f31dc5-9567-43f5-a2a8-34115d352a77-log-httpd\") pod \"87f31dc5-9567-43f5-a2a8-34115d352a77\" (UID: \"87f31dc5-9567-43f5-a2a8-34115d352a77\") " Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.591888 4741 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87f31dc5-9567-43f5-a2a8-34115d352a77-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.592211 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f31dc5-9567-43f5-a2a8-34115d352a77-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "87f31dc5-9567-43f5-a2a8-34115d352a77" (UID: "87f31dc5-9567-43f5-a2a8-34115d352a77"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.618308 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-scripts" (OuterVolumeSpecName: "scripts") pod "87f31dc5-9567-43f5-a2a8-34115d352a77" (UID: "87f31dc5-9567-43f5-a2a8-34115d352a77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.626440 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f31dc5-9567-43f5-a2a8-34115d352a77-kube-api-access-mmjzn" (OuterVolumeSpecName: "kube-api-access-mmjzn") pod "87f31dc5-9567-43f5-a2a8-34115d352a77" (UID: "87f31dc5-9567-43f5-a2a8-34115d352a77"). InnerVolumeSpecName "kube-api-access-mmjzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.668633 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "87f31dc5-9567-43f5-a2a8-34115d352a77" (UID: "87f31dc5-9567-43f5-a2a8-34115d352a77"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.696666 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.696707 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmjzn\" (UniqueName: \"kubernetes.io/projected/87f31dc5-9567-43f5-a2a8-34115d352a77-kube-api-access-mmjzn\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.696724 4741 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87f31dc5-9567-43f5-a2a8-34115d352a77-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.696741 4741 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.699140 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87f31dc5-9567-43f5-a2a8-34115d352a77" (UID: "87f31dc5-9567-43f5-a2a8-34115d352a77"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.798968 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.799194 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-config-data" (OuterVolumeSpecName: "config-data") pod "87f31dc5-9567-43f5-a2a8-34115d352a77" (UID: "87f31dc5-9567-43f5-a2a8-34115d352a77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:21 crc kubenswrapper[4741]: I0226 08:42:21.903575 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f31dc5-9567-43f5-a2a8-34115d352a77-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.342557 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87f31dc5-9567-43f5-a2a8-34115d352a77","Type":"ContainerDied","Data":"db3913af3c08bbde3a6433140796deacda01124a7a8dd2b2398c203b1b4634fd"} Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.342638 4741 scope.go:117] "RemoveContainer" containerID="18b4ee99348dae62b079916adb6704bf2569489306dbfbe47c6b4cc719c79cf3" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.342933 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.381092 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.409846 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.469652 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:42:22 crc kubenswrapper[4741]: E0226 08:42:22.470361 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="ceilometer-central-agent" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.470381 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="ceilometer-central-agent" Feb 26 08:42:22 crc kubenswrapper[4741]: E0226 08:42:22.470394 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f236a3-4736-4131-8e96-130f6aede3f2" containerName="heat-cfnapi" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.470404 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f236a3-4736-4131-8e96-130f6aede3f2" containerName="heat-cfnapi" Feb 26 08:42:22 crc kubenswrapper[4741]: E0226 08:42:22.470445 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="sg-core" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.470452 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="sg-core" Feb 26 08:42:22 crc kubenswrapper[4741]: E0226 08:42:22.470476 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aa908b-305e-4137-a861-2fac8d3c46aa" containerName="heat-api" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.470483 4741 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="54aa908b-305e-4137-a861-2fac8d3c46aa" containerName="heat-api" Feb 26 08:42:22 crc kubenswrapper[4741]: E0226 08:42:22.470499 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="ceilometer-notification-agent" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.470508 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="ceilometer-notification-agent" Feb 26 08:42:22 crc kubenswrapper[4741]: E0226 08:42:22.470523 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="proxy-httpd" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.470529 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="proxy-httpd" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.470826 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aa908b-305e-4137-a861-2fac8d3c46aa" containerName="heat-api" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.470844 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="proxy-httpd" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.470869 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="ceilometer-central-agent" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.470880 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="ceilometer-notification-agent" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.470894 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" containerName="sg-core" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.473648 4741 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.476935 4741 scope.go:117] "RemoveContainer" containerID="4d8a085b66a5e51fce4ece5736e031d65acf2d8fd82ff40afbb093e0006df4f7" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.477380 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.477837 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.481914 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.521911 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33b32735-2c86-408e-b035-6d73a2eac5ca-log-httpd\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.522001 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-scripts\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.522054 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-config-data\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.522079 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/33b32735-2c86-408e-b035-6d73a2eac5ca-run-httpd\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.522122 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7htm\" (UniqueName: \"kubernetes.io/projected/33b32735-2c86-408e-b035-6d73a2eac5ca-kube-api-access-v7htm\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.522154 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.522261 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.523967 4741 scope.go:117] "RemoveContainer" containerID="860a4e1390092d0be3c5e1e9988138441cff65dbe008d5ef765d8c9d68948d5e" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.549542 4741 scope.go:117] "RemoveContainer" containerID="6cde56754bc3729ade3971a555170a3c1556cad73f23a985e9b6b902313b3484" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.624517 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7htm\" (UniqueName: \"kubernetes.io/projected/33b32735-2c86-408e-b035-6d73a2eac5ca-kube-api-access-v7htm\") pod \"ceilometer-0\" (UID: 
\"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.624588 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.624702 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.624795 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33b32735-2c86-408e-b035-6d73a2eac5ca-log-httpd\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.624837 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-scripts\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.624876 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-config-data\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.624896 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/33b32735-2c86-408e-b035-6d73a2eac5ca-run-httpd\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.625420 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33b32735-2c86-408e-b035-6d73a2eac5ca-run-httpd\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.626414 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33b32735-2c86-408e-b035-6d73a2eac5ca-log-httpd\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.631081 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.631339 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.632157 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-scripts\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.648179 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7htm\" (UniqueName: \"kubernetes.io/projected/33b32735-2c86-408e-b035-6d73a2eac5ca-kube-api-access-v7htm\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.655332 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-config-data\") pod \"ceilometer-0\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " pod="openstack/ceilometer-0" Feb 26 08:42:22 crc kubenswrapper[4741]: I0226 08:42:22.816412 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:42:23 crc kubenswrapper[4741]: I0226 08:42:23.401657 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:42:23 crc kubenswrapper[4741]: W0226 08:42:23.424445 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33b32735_2c86_408e_b035_6d73a2eac5ca.slice/crio-7ee69586cbf9588932b613c57b8c8d87b0897fd520e1b62718eaee6673bb23d8 WatchSource:0}: Error finding container 7ee69586cbf9588932b613c57b8c8d87b0897fd520e1b62718eaee6673bb23d8: Status 404 returned error can't find the container with id 7ee69586cbf9588932b613c57b8c8d87b0897fd520e1b62718eaee6673bb23d8 Feb 26 08:42:23 crc kubenswrapper[4741]: I0226 08:42:23.809562 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f31dc5-9567-43f5-a2a8-34115d352a77" path="/var/lib/kubelet/pods/87f31dc5-9567-43f5-a2a8-34115d352a77/volumes" Feb 26 08:42:24 crc kubenswrapper[4741]: I0226 08:42:24.384428 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"33b32735-2c86-408e-b035-6d73a2eac5ca","Type":"ContainerStarted","Data":"2cd11f58702241594e3e9bb1a735b10fb6341f2e05fa9c955fc2afa3d6ad808d"} Feb 26 08:42:24 crc kubenswrapper[4741]: I0226 08:42:24.384979 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33b32735-2c86-408e-b035-6d73a2eac5ca","Type":"ContainerStarted","Data":"7ee69586cbf9588932b613c57b8c8d87b0897fd520e1b62718eaee6673bb23d8"} Feb 26 08:42:25 crc kubenswrapper[4741]: I0226 08:42:25.402022 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33b32735-2c86-408e-b035-6d73a2eac5ca","Type":"ContainerStarted","Data":"8ea4539ffa3453051950ff66048f87cec5c79e0e32e1b5b917436d7e1e86debf"} Feb 26 08:42:27 crc kubenswrapper[4741]: I0226 08:42:27.789776 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:42:27 crc kubenswrapper[4741]: E0226 08:42:27.791136 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:42:29 crc kubenswrapper[4741]: I0226 08:42:29.483967 4741 generic.go:334] "Generic (PLEG): container finished" podID="5a70d076-17f1-4f3e-bb9c-0f5740d59c27" containerID="505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3" exitCode=0 Feb 26 08:42:29 crc kubenswrapper[4741]: I0226 08:42:29.484048 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-55d8cd5998-s5j8z" event={"ID":"5a70d076-17f1-4f3e-bb9c-0f5740d59c27","Type":"ContainerDied","Data":"505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3"} Feb 26 
08:42:30 crc kubenswrapper[4741]: E0226 08:42:30.839919 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3 is running failed: container process not found" containerID="505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 08:42:30 crc kubenswrapper[4741]: E0226 08:42:30.841254 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3 is running failed: container process not found" containerID="505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 08:42:30 crc kubenswrapper[4741]: E0226 08:42:30.841713 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3 is running failed: container process not found" containerID="505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 08:42:30 crc kubenswrapper[4741]: E0226 08:42:30.841757 4741 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-55d8cd5998-s5j8z" podUID="5a70d076-17f1-4f3e-bb9c-0f5740d59c27" containerName="heat-engine" Feb 26 08:42:32 crc kubenswrapper[4741]: I0226 08:42:32.204203 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 08:42:33 crc 
kubenswrapper[4741]: I0226 08:42:33.163856 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.292337 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2xhr\" (UniqueName: \"kubernetes.io/projected/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-kube-api-access-x2xhr\") pod \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.292405 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-config-data\") pod \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.292501 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-combined-ca-bundle\") pod \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.292688 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-config-data-custom\") pod \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\" (UID: \"5a70d076-17f1-4f3e-bb9c-0f5740d59c27\") " Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.299655 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5a70d076-17f1-4f3e-bb9c-0f5740d59c27" (UID: "5a70d076-17f1-4f3e-bb9c-0f5740d59c27"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.299904 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-kube-api-access-x2xhr" (OuterVolumeSpecName: "kube-api-access-x2xhr") pod "5a70d076-17f1-4f3e-bb9c-0f5740d59c27" (UID: "5a70d076-17f1-4f3e-bb9c-0f5740d59c27"). InnerVolumeSpecName "kube-api-access-x2xhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.334774 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a70d076-17f1-4f3e-bb9c-0f5740d59c27" (UID: "5a70d076-17f1-4f3e-bb9c-0f5740d59c27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.368598 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-config-data" (OuterVolumeSpecName: "config-data") pod "5a70d076-17f1-4f3e-bb9c-0f5740d59c27" (UID: "5a70d076-17f1-4f3e-bb9c-0f5740d59c27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.396236 4741 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.396285 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2xhr\" (UniqueName: \"kubernetes.io/projected/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-kube-api-access-x2xhr\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.396298 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.396312 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a70d076-17f1-4f3e-bb9c-0f5740d59c27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.548601 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-55d8cd5998-s5j8z" event={"ID":"5a70d076-17f1-4f3e-bb9c-0f5740d59c27","Type":"ContainerDied","Data":"c8d4f6de392c68f45d0dbb09fb9d601b40d963e03482ddccc436a6b918309be9"} Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.548666 4741 scope.go:117] "RemoveContainer" containerID="505dcf4464bde24b1e4d0fdc9f6cae42b3e11780b63ee1c5faa6ca1c251a5fb3" Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.549542 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-55d8cd5998-s5j8z" Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.551463 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33b32735-2c86-408e-b035-6d73a2eac5ca","Type":"ContainerStarted","Data":"2621f7ce58e00fb6d6b5f8d88f4d284a5ba84c23944bb0b83e6334fdffcc797a"} Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.555100 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ctw5f" event={"ID":"e11a47b6-7849-4cb0-9b25-c0a26225fba2","Type":"ContainerStarted","Data":"14dd285811cd230dcd3c517e6f9ecb1c0fc1ea4cc2021fe67587d31e0a03081b"} Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.578762 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ctw5f" podStartSLOduration=2.678651258 podStartE2EDuration="14.578737243s" podCreationTimestamp="2026-02-26 08:42:19 +0000 UTC" firstStartedPulling="2026-02-26 08:42:20.81885579 +0000 UTC m=+1775.814793187" lastFinishedPulling="2026-02-26 08:42:32.718941785 +0000 UTC m=+1787.714879172" observedRunningTime="2026-02-26 08:42:33.576997994 +0000 UTC m=+1788.572935401" watchObservedRunningTime="2026-02-26 08:42:33.578737243 +0000 UTC m=+1788.574674630" Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.620561 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-55d8cd5998-s5j8z"] Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.635860 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-55d8cd5998-s5j8z"] Feb 26 08:42:33 crc kubenswrapper[4741]: E0226 08:42:33.786321 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a70d076_17f1_4f3e_bb9c_0f5740d59c27.slice/crio-c8d4f6de392c68f45d0dbb09fb9d601b40d963e03482ddccc436a6b918309be9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a70d076_17f1_4f3e_bb9c_0f5740d59c27.slice\": RecentStats: unable to find data in memory cache]" Feb 26 08:42:33 crc kubenswrapper[4741]: I0226 08:42:33.855887 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a70d076-17f1-4f3e-bb9c-0f5740d59c27" path="/var/lib/kubelet/pods/5a70d076-17f1-4f3e-bb9c-0f5740d59c27/volumes" Feb 26 08:42:35 crc kubenswrapper[4741]: I0226 08:42:35.588268 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33b32735-2c86-408e-b035-6d73a2eac5ca","Type":"ContainerStarted","Data":"c1b51feb5347f8c9aa27d1a69aa34d6def5b499016a5538ccf62adea42c87c4d"} Feb 26 08:42:35 crc kubenswrapper[4741]: I0226 08:42:35.591374 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 08:42:35 crc kubenswrapper[4741]: I0226 08:42:35.626176 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.489608783 podStartE2EDuration="13.626150099s" podCreationTimestamp="2026-02-26 08:42:22 +0000 UTC" firstStartedPulling="2026-02-26 08:42:23.426924067 +0000 UTC m=+1778.422861444" lastFinishedPulling="2026-02-26 08:42:34.563465373 +0000 UTC m=+1789.559402760" observedRunningTime="2026-02-26 08:42:35.623698309 +0000 UTC m=+1790.619635696" watchObservedRunningTime="2026-02-26 08:42:35.626150099 +0000 UTC m=+1790.622087486" Feb 26 08:42:39 crc kubenswrapper[4741]: I0226 08:42:39.540878 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:42:39 crc kubenswrapper[4741]: I0226 08:42:39.541943 4741 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="ceilometer-central-agent" containerID="cri-o://2cd11f58702241594e3e9bb1a735b10fb6341f2e05fa9c955fc2afa3d6ad808d" gracePeriod=30 Feb 26 08:42:39 crc kubenswrapper[4741]: I0226 08:42:39.541994 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="sg-core" containerID="cri-o://2621f7ce58e00fb6d6b5f8d88f4d284a5ba84c23944bb0b83e6334fdffcc797a" gracePeriod=30 Feb 26 08:42:39 crc kubenswrapper[4741]: I0226 08:42:39.542155 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="ceilometer-notification-agent" containerID="cri-o://8ea4539ffa3453051950ff66048f87cec5c79e0e32e1b5b917436d7e1e86debf" gracePeriod=30 Feb 26 08:42:39 crc kubenswrapper[4741]: I0226 08:42:39.542441 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="proxy-httpd" containerID="cri-o://c1b51feb5347f8c9aa27d1a69aa34d6def5b499016a5538ccf62adea42c87c4d" gracePeriod=30 Feb 26 08:42:39 crc kubenswrapper[4741]: I0226 08:42:39.787865 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:42:39 crc kubenswrapper[4741]: E0226 08:42:39.788615 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:42:40 crc kubenswrapper[4741]: I0226 
08:42:40.672712 4741 generic.go:334] "Generic (PLEG): container finished" podID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerID="c1b51feb5347f8c9aa27d1a69aa34d6def5b499016a5538ccf62adea42c87c4d" exitCode=0 Feb 26 08:42:40 crc kubenswrapper[4741]: I0226 08:42:40.673057 4741 generic.go:334] "Generic (PLEG): container finished" podID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerID="2621f7ce58e00fb6d6b5f8d88f4d284a5ba84c23944bb0b83e6334fdffcc797a" exitCode=2 Feb 26 08:42:40 crc kubenswrapper[4741]: I0226 08:42:40.672775 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33b32735-2c86-408e-b035-6d73a2eac5ca","Type":"ContainerDied","Data":"c1b51feb5347f8c9aa27d1a69aa34d6def5b499016a5538ccf62adea42c87c4d"} Feb 26 08:42:40 crc kubenswrapper[4741]: I0226 08:42:40.673122 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33b32735-2c86-408e-b035-6d73a2eac5ca","Type":"ContainerDied","Data":"2621f7ce58e00fb6d6b5f8d88f4d284a5ba84c23944bb0b83e6334fdffcc797a"} Feb 26 08:42:40 crc kubenswrapper[4741]: I0226 08:42:40.673138 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33b32735-2c86-408e-b035-6d73a2eac5ca","Type":"ContainerDied","Data":"2cd11f58702241594e3e9bb1a735b10fb6341f2e05fa9c955fc2afa3d6ad808d"} Feb 26 08:42:40 crc kubenswrapper[4741]: I0226 08:42:40.673068 4741 generic.go:334] "Generic (PLEG): container finished" podID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerID="2cd11f58702241594e3e9bb1a735b10fb6341f2e05fa9c955fc2afa3d6ad808d" exitCode=0 Feb 26 08:42:41 crc kubenswrapper[4741]: I0226 08:42:41.714120 4741 generic.go:334] "Generic (PLEG): container finished" podID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerID="8ea4539ffa3453051950ff66048f87cec5c79e0e32e1b5b917436d7e1e86debf" exitCode=0 Feb 26 08:42:41 crc kubenswrapper[4741]: I0226 08:42:41.714171 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"33b32735-2c86-408e-b035-6d73a2eac5ca","Type":"ContainerDied","Data":"8ea4539ffa3453051950ff66048f87cec5c79e0e32e1b5b917436d7e1e86debf"} Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.087272 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.172930 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-sg-core-conf-yaml\") pod \"33b32735-2c86-408e-b035-6d73a2eac5ca\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.173768 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-config-data\") pod \"33b32735-2c86-408e-b035-6d73a2eac5ca\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.173928 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7htm\" (UniqueName: \"kubernetes.io/projected/33b32735-2c86-408e-b035-6d73a2eac5ca-kube-api-access-v7htm\") pod \"33b32735-2c86-408e-b035-6d73a2eac5ca\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.174209 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-combined-ca-bundle\") pod \"33b32735-2c86-408e-b035-6d73a2eac5ca\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.174329 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-scripts\") pod \"33b32735-2c86-408e-b035-6d73a2eac5ca\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.174446 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33b32735-2c86-408e-b035-6d73a2eac5ca-log-httpd\") pod \"33b32735-2c86-408e-b035-6d73a2eac5ca\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.174580 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33b32735-2c86-408e-b035-6d73a2eac5ca-run-httpd\") pod \"33b32735-2c86-408e-b035-6d73a2eac5ca\" (UID: \"33b32735-2c86-408e-b035-6d73a2eac5ca\") " Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.175434 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33b32735-2c86-408e-b035-6d73a2eac5ca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "33b32735-2c86-408e-b035-6d73a2eac5ca" (UID: "33b32735-2c86-408e-b035-6d73a2eac5ca"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.175704 4741 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33b32735-2c86-408e-b035-6d73a2eac5ca-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.176238 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33b32735-2c86-408e-b035-6d73a2eac5ca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "33b32735-2c86-408e-b035-6d73a2eac5ca" (UID: "33b32735-2c86-408e-b035-6d73a2eac5ca"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.185741 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-scripts" (OuterVolumeSpecName: "scripts") pod "33b32735-2c86-408e-b035-6d73a2eac5ca" (UID: "33b32735-2c86-408e-b035-6d73a2eac5ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.194223 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b32735-2c86-408e-b035-6d73a2eac5ca-kube-api-access-v7htm" (OuterVolumeSpecName: "kube-api-access-v7htm") pod "33b32735-2c86-408e-b035-6d73a2eac5ca" (UID: "33b32735-2c86-408e-b035-6d73a2eac5ca"). InnerVolumeSpecName "kube-api-access-v7htm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.217174 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "33b32735-2c86-408e-b035-6d73a2eac5ca" (UID: "33b32735-2c86-408e-b035-6d73a2eac5ca"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.289929 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33b32735-2c86-408e-b035-6d73a2eac5ca" (UID: "33b32735-2c86-408e-b035-6d73a2eac5ca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.290321 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.290358 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.290388 4741 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33b32735-2c86-408e-b035-6d73a2eac5ca-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.290396 4741 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.290406 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7htm\" (UniqueName: \"kubernetes.io/projected/33b32735-2c86-408e-b035-6d73a2eac5ca-kube-api-access-v7htm\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.319827 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-config-data" (OuterVolumeSpecName: "config-data") pod "33b32735-2c86-408e-b035-6d73a2eac5ca" (UID: "33b32735-2c86-408e-b035-6d73a2eac5ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.394178 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b32735-2c86-408e-b035-6d73a2eac5ca-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.732623 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33b32735-2c86-408e-b035-6d73a2eac5ca","Type":"ContainerDied","Data":"7ee69586cbf9588932b613c57b8c8d87b0897fd520e1b62718eaee6673bb23d8"} Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.732704 4741 scope.go:117] "RemoveContainer" containerID="c1b51feb5347f8c9aa27d1a69aa34d6def5b499016a5538ccf62adea42c87c4d" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.732755 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.778125 4741 scope.go:117] "RemoveContainer" containerID="2621f7ce58e00fb6d6b5f8d88f4d284a5ba84c23944bb0b83e6334fdffcc797a" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.783129 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.799327 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.805675 4741 scope.go:117] "RemoveContainer" containerID="8ea4539ffa3453051950ff66048f87cec5c79e0e32e1b5b917436d7e1e86debf" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.815843 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:42:42 crc kubenswrapper[4741]: E0226 08:42:42.818882 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="sg-core" Feb 26 08:42:42 crc kubenswrapper[4741]: 
I0226 08:42:42.818994 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="sg-core" Feb 26 08:42:42 crc kubenswrapper[4741]: E0226 08:42:42.819092 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="ceilometer-central-agent" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.819176 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="ceilometer-central-agent" Feb 26 08:42:42 crc kubenswrapper[4741]: E0226 08:42:42.819321 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="proxy-httpd" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.819383 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="proxy-httpd" Feb 26 08:42:42 crc kubenswrapper[4741]: E0226 08:42:42.819466 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a70d076-17f1-4f3e-bb9c-0f5740d59c27" containerName="heat-engine" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.819523 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a70d076-17f1-4f3e-bb9c-0f5740d59c27" containerName="heat-engine" Feb 26 08:42:42 crc kubenswrapper[4741]: E0226 08:42:42.819612 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="ceilometer-notification-agent" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.819672 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="ceilometer-notification-agent" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.819962 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a70d076-17f1-4f3e-bb9c-0f5740d59c27" containerName="heat-engine" Feb 26 08:42:42 crc 
kubenswrapper[4741]: I0226 08:42:42.820046 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="ceilometer-central-agent" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.820169 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="ceilometer-notification-agent" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.820250 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="sg-core" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.820331 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" containerName="proxy-httpd" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.823089 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.827356 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.827609 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.837905 4741 scope.go:117] "RemoveContainer" containerID="2cd11f58702241594e3e9bb1a735b10fb6341f2e05fa9c955fc2afa3d6ad808d" Feb 26 08:42:42 crc kubenswrapper[4741]: I0226 08:42:42.859799 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.011990 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-log-httpd\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 
26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.012037 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.012078 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpz2x\" (UniqueName: \"kubernetes.io/projected/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-kube-api-access-kpz2x\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.012182 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-scripts\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.012217 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-config-data\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.012260 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.012650 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-run-httpd\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.115188 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-scripts\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.115681 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-config-data\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.115735 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.115823 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-run-httpd\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.115948 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-log-httpd\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " 
pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.115970 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.116003 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpz2x\" (UniqueName: \"kubernetes.io/projected/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-kube-api-access-kpz2x\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.117171 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-run-httpd\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.117169 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-log-httpd\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.121705 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-scripts\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.121831 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.122808 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-config-data\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.125881 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.141818 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpz2x\" (UniqueName: \"kubernetes.io/projected/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-kube-api-access-kpz2x\") pod \"ceilometer-0\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.144920 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.699086 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.755023 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6d66c8-28cb-499d-a09e-cefaf8f2020b","Type":"ContainerStarted","Data":"d29ccaa91555ccadb81853488e7e49e5023eb22efad311703aa2569b6737cc52"} Feb 26 08:42:43 crc kubenswrapper[4741]: I0226 08:42:43.801954 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b32735-2c86-408e-b035-6d73a2eac5ca" path="/var/lib/kubelet/pods/33b32735-2c86-408e-b035-6d73a2eac5ca/volumes" Feb 26 08:42:44 crc kubenswrapper[4741]: I0226 08:42:44.774576 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6d66c8-28cb-499d-a09e-cefaf8f2020b","Type":"ContainerStarted","Data":"5cb6a5220aa1238c041f516674549a8a68caa8a0bb8b6ca231cfc96f2d6573c5"} Feb 26 08:42:45 crc kubenswrapper[4741]: I0226 08:42:45.875650 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6d66c8-28cb-499d-a09e-cefaf8f2020b","Type":"ContainerStarted","Data":"81843d81d7bd0783df7e81cfce00bb053eee8674563b77cdd8939046c29e42a8"} Feb 26 08:42:46 crc kubenswrapper[4741]: I0226 08:42:46.872716 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6d66c8-28cb-499d-a09e-cefaf8f2020b","Type":"ContainerStarted","Data":"4b83ca6e2e571b2ce0ebe921939504468f2deeb85528497e7869bb22bbc29685"} Feb 26 08:42:46 crc kubenswrapper[4741]: I0226 08:42:46.879516 4741 generic.go:334] "Generic (PLEG): container finished" podID="e11a47b6-7849-4cb0-9b25-c0a26225fba2" containerID="14dd285811cd230dcd3c517e6f9ecb1c0fc1ea4cc2021fe67587d31e0a03081b" exitCode=0 Feb 26 08:42:46 crc kubenswrapper[4741]: I0226 08:42:46.879584 4741 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ctw5f" event={"ID":"e11a47b6-7849-4cb0-9b25-c0a26225fba2","Type":"ContainerDied","Data":"14dd285811cd230dcd3c517e6f9ecb1c0fc1ea4cc2021fe67587d31e0a03081b"} Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.433937 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ctw5f" Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.511306 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-scripts\") pod \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.511561 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-config-data\") pod \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.511903 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpfxk\" (UniqueName: \"kubernetes.io/projected/e11a47b6-7849-4cb0-9b25-c0a26225fba2-kube-api-access-bpfxk\") pod \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.512039 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-combined-ca-bundle\") pod \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\" (UID: \"e11a47b6-7849-4cb0-9b25-c0a26225fba2\") " Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.520647 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-scripts" (OuterVolumeSpecName: "scripts") pod "e11a47b6-7849-4cb0-9b25-c0a26225fba2" (UID: "e11a47b6-7849-4cb0-9b25-c0a26225fba2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.538093 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11a47b6-7849-4cb0-9b25-c0a26225fba2-kube-api-access-bpfxk" (OuterVolumeSpecName: "kube-api-access-bpfxk") pod "e11a47b6-7849-4cb0-9b25-c0a26225fba2" (UID: "e11a47b6-7849-4cb0-9b25-c0a26225fba2"). InnerVolumeSpecName "kube-api-access-bpfxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.556196 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e11a47b6-7849-4cb0-9b25-c0a26225fba2" (UID: "e11a47b6-7849-4cb0-9b25-c0a26225fba2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.563175 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-config-data" (OuterVolumeSpecName: "config-data") pod "e11a47b6-7849-4cb0-9b25-c0a26225fba2" (UID: "e11a47b6-7849-4cb0-9b25-c0a26225fba2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.615638 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpfxk\" (UniqueName: \"kubernetes.io/projected/e11a47b6-7849-4cb0-9b25-c0a26225fba2-kube-api-access-bpfxk\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.615921 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.615992 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.616064 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11a47b6-7849-4cb0-9b25-c0a26225fba2-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.908713 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6d66c8-28cb-499d-a09e-cefaf8f2020b","Type":"ContainerStarted","Data":"f1249916a0ed5a8a5b511e1e6740aaa52cfef2ce08c8cf322eeea578de2d8308"} Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.912269 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.915698 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ctw5f" event={"ID":"e11a47b6-7849-4cb0-9b25-c0a26225fba2","Type":"ContainerDied","Data":"55647c89af5f6d49fba79939c8cccb6bdf60526182ef453f6dfa284c0bddeba8"} Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.915761 4741 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="55647c89af5f6d49fba79939c8cccb6bdf60526182ef453f6dfa284c0bddeba8" Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.915853 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ctw5f" Feb 26 08:42:48 crc kubenswrapper[4741]: I0226 08:42:48.966466 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.499276397 podStartE2EDuration="6.966431796s" podCreationTimestamp="2026-02-26 08:42:42 +0000 UTC" firstStartedPulling="2026-02-26 08:42:43.718779674 +0000 UTC m=+1798.714717061" lastFinishedPulling="2026-02-26 08:42:48.185935073 +0000 UTC m=+1803.181872460" observedRunningTime="2026-02-26 08:42:48.942793404 +0000 UTC m=+1803.938730791" watchObservedRunningTime="2026-02-26 08:42:48.966431796 +0000 UTC m=+1803.962369183" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.080056 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 08:42:49 crc kubenswrapper[4741]: E0226 08:42:49.080800 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11a47b6-7849-4cb0-9b25-c0a26225fba2" containerName="nova-cell0-conductor-db-sync" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.080827 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11a47b6-7849-4cb0-9b25-c0a26225fba2" containerName="nova-cell0-conductor-db-sync" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.081137 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="e11a47b6-7849-4cb0-9b25-c0a26225fba2" containerName="nova-cell0-conductor-db-sync" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.082285 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.085354 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.090423 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9m8cf" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.097093 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.211272 4741 scope.go:117] "RemoveContainer" containerID="175267b7510bc89015e793f52051f1c8364c1b56302ecbf0c623a22ea13b77d8" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.234626 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.235474 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjmzg\" (UniqueName: \"kubernetes.io/projected/1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b-kube-api-access-mjmzg\") pod \"nova-cell0-conductor-0\" (UID: \"1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.235627 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 
08:42:49.337976 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjmzg\" (UniqueName: \"kubernetes.io/projected/1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b-kube-api-access-mjmzg\") pod \"nova-cell0-conductor-0\" (UID: \"1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.338128 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.338317 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.345219 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.346193 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.360029 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjmzg\" 
(UniqueName: \"kubernetes.io/projected/1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b-kube-api-access-mjmzg\") pod \"nova-cell0-conductor-0\" (UID: \"1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 08:42:49 crc kubenswrapper[4741]: I0226 08:42:49.441665 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 26 08:42:50 crc kubenswrapper[4741]: I0226 08:42:50.046354 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 08:42:50 crc kubenswrapper[4741]: I0226 08:42:50.946574 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b","Type":"ContainerStarted","Data":"b1733de75ca41b29215e55ae04b234e9cde7c0b31fe5de3925ad36174fbf6a10"} Feb 26 08:42:50 crc kubenswrapper[4741]: I0226 08:42:50.946623 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b","Type":"ContainerStarted","Data":"0308a197ed7b4fdcd66cfae72acea34dcab5a5521b57ea0cbcc0155e703ec64f"} Feb 26 08:42:50 crc kubenswrapper[4741]: I0226 08:42:50.946741 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 26 08:42:50 crc kubenswrapper[4741]: I0226 08:42:50.974560 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.9720788040000001 podStartE2EDuration="1.972078804s" podCreationTimestamp="2026-02-26 08:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:42:50.965977601 +0000 UTC m=+1805.961914998" watchObservedRunningTime="2026-02-26 08:42:50.972078804 +0000 UTC m=+1805.968016201" Feb 26 08:42:51 crc kubenswrapper[4741]: I0226 08:42:51.972885 4741 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-q775j"] Feb 26 08:42:51 crc kubenswrapper[4741]: I0226 08:42:51.980372 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-q775j" Feb 26 08:42:51 crc kubenswrapper[4741]: I0226 08:42:51.997984 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-3ddd-account-create-update-28w7b"] Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.000726 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-3ddd-account-create-update-28w7b" Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.003130 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.039481 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-q775j"] Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.053507 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-3ddd-account-create-update-28w7b"] Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.136067 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a78221f1-6070-4847-9f6e-88867af64c21-operator-scripts\") pod \"aodh-3ddd-account-create-update-28w7b\" (UID: \"a78221f1-6070-4847-9f6e-88867af64c21\") " pod="openstack/aodh-3ddd-account-create-update-28w7b" Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.136247 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dqx4\" (UniqueName: \"kubernetes.io/projected/a78221f1-6070-4847-9f6e-88867af64c21-kube-api-access-7dqx4\") pod \"aodh-3ddd-account-create-update-28w7b\" (UID: \"a78221f1-6070-4847-9f6e-88867af64c21\") " pod="openstack/aodh-3ddd-account-create-update-28w7b" Feb 26 08:42:52 
crc kubenswrapper[4741]: I0226 08:42:52.136600 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hchsf\" (UniqueName: \"kubernetes.io/projected/87c3d5ff-f72f-43b9-8df5-be734ffa83c0-kube-api-access-hchsf\") pod \"aodh-db-create-q775j\" (UID: \"87c3d5ff-f72f-43b9-8df5-be734ffa83c0\") " pod="openstack/aodh-db-create-q775j" Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.136962 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87c3d5ff-f72f-43b9-8df5-be734ffa83c0-operator-scripts\") pod \"aodh-db-create-q775j\" (UID: \"87c3d5ff-f72f-43b9-8df5-be734ffa83c0\") " pod="openstack/aodh-db-create-q775j" Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.240083 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hchsf\" (UniqueName: \"kubernetes.io/projected/87c3d5ff-f72f-43b9-8df5-be734ffa83c0-kube-api-access-hchsf\") pod \"aodh-db-create-q775j\" (UID: \"87c3d5ff-f72f-43b9-8df5-be734ffa83c0\") " pod="openstack/aodh-db-create-q775j" Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.240275 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87c3d5ff-f72f-43b9-8df5-be734ffa83c0-operator-scripts\") pod \"aodh-db-create-q775j\" (UID: \"87c3d5ff-f72f-43b9-8df5-be734ffa83c0\") " pod="openstack/aodh-db-create-q775j" Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.240396 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a78221f1-6070-4847-9f6e-88867af64c21-operator-scripts\") pod \"aodh-3ddd-account-create-update-28w7b\" (UID: \"a78221f1-6070-4847-9f6e-88867af64c21\") " pod="openstack/aodh-3ddd-account-create-update-28w7b" Feb 26 08:42:52 crc 
kubenswrapper[4741]: I0226 08:42:52.240465 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dqx4\" (UniqueName: \"kubernetes.io/projected/a78221f1-6070-4847-9f6e-88867af64c21-kube-api-access-7dqx4\") pod \"aodh-3ddd-account-create-update-28w7b\" (UID: \"a78221f1-6070-4847-9f6e-88867af64c21\") " pod="openstack/aodh-3ddd-account-create-update-28w7b" Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.241394 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a78221f1-6070-4847-9f6e-88867af64c21-operator-scripts\") pod \"aodh-3ddd-account-create-update-28w7b\" (UID: \"a78221f1-6070-4847-9f6e-88867af64c21\") " pod="openstack/aodh-3ddd-account-create-update-28w7b" Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.241700 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87c3d5ff-f72f-43b9-8df5-be734ffa83c0-operator-scripts\") pod \"aodh-db-create-q775j\" (UID: \"87c3d5ff-f72f-43b9-8df5-be734ffa83c0\") " pod="openstack/aodh-db-create-q775j" Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.273321 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dqx4\" (UniqueName: \"kubernetes.io/projected/a78221f1-6070-4847-9f6e-88867af64c21-kube-api-access-7dqx4\") pod \"aodh-3ddd-account-create-update-28w7b\" (UID: \"a78221f1-6070-4847-9f6e-88867af64c21\") " pod="openstack/aodh-3ddd-account-create-update-28w7b" Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.279047 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hchsf\" (UniqueName: \"kubernetes.io/projected/87c3d5ff-f72f-43b9-8df5-be734ffa83c0-kube-api-access-hchsf\") pod \"aodh-db-create-q775j\" (UID: \"87c3d5ff-f72f-43b9-8df5-be734ffa83c0\") " pod="openstack/aodh-db-create-q775j" Feb 26 08:42:52 crc 
kubenswrapper[4741]: I0226 08:42:52.345619 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-q775j" Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.363088 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-3ddd-account-create-update-28w7b" Feb 26 08:42:52 crc kubenswrapper[4741]: W0226 08:42:52.967029 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87c3d5ff_f72f_43b9_8df5_be734ffa83c0.slice/crio-6624e91964947984ceb0323c89229674bf14a2c8bf91cb2a7a8caa601d07dd28 WatchSource:0}: Error finding container 6624e91964947984ceb0323c89229674bf14a2c8bf91cb2a7a8caa601d07dd28: Status 404 returned error can't find the container with id 6624e91964947984ceb0323c89229674bf14a2c8bf91cb2a7a8caa601d07dd28 Feb 26 08:42:52 crc kubenswrapper[4741]: I0226 08:42:52.975603 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-q775j"] Feb 26 08:42:53 crc kubenswrapper[4741]: I0226 08:42:53.006800 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-q775j" event={"ID":"87c3d5ff-f72f-43b9-8df5-be734ffa83c0","Type":"ContainerStarted","Data":"6624e91964947984ceb0323c89229674bf14a2c8bf91cb2a7a8caa601d07dd28"} Feb 26 08:42:53 crc kubenswrapper[4741]: I0226 08:42:53.032505 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-3ddd-account-create-update-28w7b"] Feb 26 08:42:53 crc kubenswrapper[4741]: I0226 08:42:53.788524 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:42:53 crc kubenswrapper[4741]: E0226 08:42:53.789288 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:42:54 crc kubenswrapper[4741]: I0226 08:42:54.021729 4741 generic.go:334] "Generic (PLEG): container finished" podID="87c3d5ff-f72f-43b9-8df5-be734ffa83c0" containerID="b0b14c2a5b1fb83519812d583e681072de2d28ac03a566754d0dc85a451c4896" exitCode=0 Feb 26 08:42:54 crc kubenswrapper[4741]: I0226 08:42:54.021891 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-q775j" event={"ID":"87c3d5ff-f72f-43b9-8df5-be734ffa83c0","Type":"ContainerDied","Data":"b0b14c2a5b1fb83519812d583e681072de2d28ac03a566754d0dc85a451c4896"} Feb 26 08:42:54 crc kubenswrapper[4741]: I0226 08:42:54.025516 4741 generic.go:334] "Generic (PLEG): container finished" podID="a78221f1-6070-4847-9f6e-88867af64c21" containerID="79b76d2bd414c769c9c2b17526395c32dc47e129a295b6971757049b77e4efa0" exitCode=0 Feb 26 08:42:54 crc kubenswrapper[4741]: I0226 08:42:54.025584 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-3ddd-account-create-update-28w7b" event={"ID":"a78221f1-6070-4847-9f6e-88867af64c21","Type":"ContainerDied","Data":"79b76d2bd414c769c9c2b17526395c32dc47e129a295b6971757049b77e4efa0"} Feb 26 08:42:54 crc kubenswrapper[4741]: I0226 08:42:54.025612 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-3ddd-account-create-update-28w7b" event={"ID":"a78221f1-6070-4847-9f6e-88867af64c21","Type":"ContainerStarted","Data":"e6d00acf19ffc7d20ce93a2ba885e0c08c842facaa7edfbe22f03dada8cd9c59"} Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.568409 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-3ddd-account-create-update-28w7b" Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.582215 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-q775j" Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.654734 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dqx4\" (UniqueName: \"kubernetes.io/projected/a78221f1-6070-4847-9f6e-88867af64c21-kube-api-access-7dqx4\") pod \"a78221f1-6070-4847-9f6e-88867af64c21\" (UID: \"a78221f1-6070-4847-9f6e-88867af64c21\") " Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.654981 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a78221f1-6070-4847-9f6e-88867af64c21-operator-scripts\") pod \"a78221f1-6070-4847-9f6e-88867af64c21\" (UID: \"a78221f1-6070-4847-9f6e-88867af64c21\") " Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.657229 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a78221f1-6070-4847-9f6e-88867af64c21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a78221f1-6070-4847-9f6e-88867af64c21" (UID: "a78221f1-6070-4847-9f6e-88867af64c21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.662896 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a78221f1-6070-4847-9f6e-88867af64c21-kube-api-access-7dqx4" (OuterVolumeSpecName: "kube-api-access-7dqx4") pod "a78221f1-6070-4847-9f6e-88867af64c21" (UID: "a78221f1-6070-4847-9f6e-88867af64c21"). InnerVolumeSpecName "kube-api-access-7dqx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.759009 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87c3d5ff-f72f-43b9-8df5-be734ffa83c0-operator-scripts\") pod \"87c3d5ff-f72f-43b9-8df5-be734ffa83c0\" (UID: \"87c3d5ff-f72f-43b9-8df5-be734ffa83c0\") " Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.759061 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hchsf\" (UniqueName: \"kubernetes.io/projected/87c3d5ff-f72f-43b9-8df5-be734ffa83c0-kube-api-access-hchsf\") pod \"87c3d5ff-f72f-43b9-8df5-be734ffa83c0\" (UID: \"87c3d5ff-f72f-43b9-8df5-be734ffa83c0\") " Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.759555 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87c3d5ff-f72f-43b9-8df5-be734ffa83c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87c3d5ff-f72f-43b9-8df5-be734ffa83c0" (UID: "87c3d5ff-f72f-43b9-8df5-be734ffa83c0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.760097 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dqx4\" (UniqueName: \"kubernetes.io/projected/a78221f1-6070-4847-9f6e-88867af64c21-kube-api-access-7dqx4\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.760138 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a78221f1-6070-4847-9f6e-88867af64c21-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.760152 4741 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87c3d5ff-f72f-43b9-8df5-be734ffa83c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.762179 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c3d5ff-f72f-43b9-8df5-be734ffa83c0-kube-api-access-hchsf" (OuterVolumeSpecName: "kube-api-access-hchsf") pod "87c3d5ff-f72f-43b9-8df5-be734ffa83c0" (UID: "87c3d5ff-f72f-43b9-8df5-be734ffa83c0"). InnerVolumeSpecName "kube-api-access-hchsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:42:55 crc kubenswrapper[4741]: I0226 08:42:55.862547 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hchsf\" (UniqueName: \"kubernetes.io/projected/87c3d5ff-f72f-43b9-8df5-be734ffa83c0-kube-api-access-hchsf\") on node \"crc\" DevicePath \"\"" Feb 26 08:42:56 crc kubenswrapper[4741]: I0226 08:42:56.056740 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-3ddd-account-create-update-28w7b" Feb 26 08:42:56 crc kubenswrapper[4741]: I0226 08:42:56.057167 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-3ddd-account-create-update-28w7b" event={"ID":"a78221f1-6070-4847-9f6e-88867af64c21","Type":"ContainerDied","Data":"e6d00acf19ffc7d20ce93a2ba885e0c08c842facaa7edfbe22f03dada8cd9c59"} Feb 26 08:42:56 crc kubenswrapper[4741]: I0226 08:42:56.057248 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6d00acf19ffc7d20ce93a2ba885e0c08c842facaa7edfbe22f03dada8cd9c59" Feb 26 08:42:56 crc kubenswrapper[4741]: I0226 08:42:56.058689 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-q775j" event={"ID":"87c3d5ff-f72f-43b9-8df5-be734ffa83c0","Type":"ContainerDied","Data":"6624e91964947984ceb0323c89229674bf14a2c8bf91cb2a7a8caa601d07dd28"} Feb 26 08:42:56 crc kubenswrapper[4741]: I0226 08:42:56.058799 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6624e91964947984ceb0323c89229674bf14a2c8bf91cb2a7a8caa601d07dd28" Feb 26 08:42:56 crc kubenswrapper[4741]: I0226 08:42:56.058821 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-q775j" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.406638 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-m2b27"] Feb 26 08:42:57 crc kubenswrapper[4741]: E0226 08:42:57.407401 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78221f1-6070-4847-9f6e-88867af64c21" containerName="mariadb-account-create-update" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.407420 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78221f1-6070-4847-9f6e-88867af64c21" containerName="mariadb-account-create-update" Feb 26 08:42:57 crc kubenswrapper[4741]: E0226 08:42:57.407431 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c3d5ff-f72f-43b9-8df5-be734ffa83c0" containerName="mariadb-database-create" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.407439 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c3d5ff-f72f-43b9-8df5-be734ffa83c0" containerName="mariadb-database-create" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.407747 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="a78221f1-6070-4847-9f6e-88867af64c21" containerName="mariadb-account-create-update" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.407791 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c3d5ff-f72f-43b9-8df5-be734ffa83c0" containerName="mariadb-database-create" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.409162 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.415800 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.416192 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tlszt" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.422889 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.422978 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.433275 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-m2b27"] Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.509489 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-combined-ca-bundle\") pod \"aodh-db-sync-m2b27\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.509717 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-scripts\") pod \"aodh-db-sync-m2b27\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.509990 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-config-data\") pod \"aodh-db-sync-m2b27\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " 
pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.510357 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wrh7\" (UniqueName: \"kubernetes.io/projected/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-kube-api-access-5wrh7\") pod \"aodh-db-sync-m2b27\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.613836 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-config-data\") pod \"aodh-db-sync-m2b27\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.614006 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wrh7\" (UniqueName: \"kubernetes.io/projected/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-kube-api-access-5wrh7\") pod \"aodh-db-sync-m2b27\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.614200 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-combined-ca-bundle\") pod \"aodh-db-sync-m2b27\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.614323 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-scripts\") pod \"aodh-db-sync-m2b27\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.623954 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-config-data\") pod \"aodh-db-sync-m2b27\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.624799 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-scripts\") pod \"aodh-db-sync-m2b27\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.629157 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-combined-ca-bundle\") pod \"aodh-db-sync-m2b27\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.664257 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wrh7\" (UniqueName: \"kubernetes.io/projected/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-kube-api-access-5wrh7\") pod \"aodh-db-sync-m2b27\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:57 crc kubenswrapper[4741]: I0226 08:42:57.744634 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-m2b27" Feb 26 08:42:58 crc kubenswrapper[4741]: W0226 08:42:58.316340 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e2d3d2a_ca82_46f2_99fa_bfa2d2a5c153.slice/crio-c0a7b103309cf4955e07904d0bfc5c6374664bbcdb664a7aaa5718f4fbd1bd3f WatchSource:0}: Error finding container c0a7b103309cf4955e07904d0bfc5c6374664bbcdb664a7aaa5718f4fbd1bd3f: Status 404 returned error can't find the container with id c0a7b103309cf4955e07904d0bfc5c6374664bbcdb664a7aaa5718f4fbd1bd3f Feb 26 08:42:58 crc kubenswrapper[4741]: I0226 08:42:58.318791 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-m2b27"] Feb 26 08:42:59 crc kubenswrapper[4741]: I0226 08:42:59.130437 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-m2b27" event={"ID":"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153","Type":"ContainerStarted","Data":"c0a7b103309cf4955e07904d0bfc5c6374664bbcdb664a7aaa5718f4fbd1bd3f"} Feb 26 08:42:59 crc kubenswrapper[4741]: I0226 08:42:59.514144 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.101728 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-sd6r4"] Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.104407 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.107776 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.112879 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.150721 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-sd6r4"] Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.204988 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph84m\" (UniqueName: \"kubernetes.io/projected/f09ad43d-1fea-4335-97aa-5428b9be77dd-kube-api-access-ph84m\") pod \"nova-cell0-cell-mapping-sd6r4\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.205280 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-config-data\") pod \"nova-cell0-cell-mapping-sd6r4\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.205332 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sd6r4\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.205458 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-scripts\") pod \"nova-cell0-cell-mapping-sd6r4\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.309444 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph84m\" (UniqueName: \"kubernetes.io/projected/f09ad43d-1fea-4335-97aa-5428b9be77dd-kube-api-access-ph84m\") pod \"nova-cell0-cell-mapping-sd6r4\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.309648 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-config-data\") pod \"nova-cell0-cell-mapping-sd6r4\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.309690 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sd6r4\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.309826 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-scripts\") pod \"nova-cell0-cell-mapping-sd6r4\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.331806 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sd6r4\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.332463 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-scripts\") pod \"nova-cell0-cell-mapping-sd6r4\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.348987 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-config-data\") pod \"nova-cell0-cell-mapping-sd6r4\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.371118 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph84m\" (UniqueName: \"kubernetes.io/projected/f09ad43d-1fea-4335-97aa-5428b9be77dd-kube-api-access-ph84m\") pod \"nova-cell0-cell-mapping-sd6r4\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.375674 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.378447 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.385912 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.448665 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.481609 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.517542 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d80333c-99df-466b-bb44-6fd9177b60ab-config-data\") pod \"nova-api-0\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.517684 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d80333c-99df-466b-bb44-6fd9177b60ab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.517778 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d80333c-99df-466b-bb44-6fd9177b60ab-logs\") pod \"nova-api-0\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.518053 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8knch\" (UniqueName: \"kubernetes.io/projected/0d80333c-99df-466b-bb44-6fd9177b60ab-kube-api-access-8knch\") pod \"nova-api-0\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.555805 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.557765 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.564501 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.609176 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.612661 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.616728 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.622352 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8knch\" (UniqueName: \"kubernetes.io/projected/0d80333c-99df-466b-bb44-6fd9177b60ab-kube-api-access-8knch\") pod \"nova-api-0\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.622415 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7c12564-44ab-408c-b2ed-2466bff3274d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7c12564-44ab-408c-b2ed-2466bff3274d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.622491 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d80333c-99df-466b-bb44-6fd9177b60ab-config-data\") pod \"nova-api-0\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.622525 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d80333c-99df-466b-bb44-6fd9177b60ab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.622563 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d80333c-99df-466b-bb44-6fd9177b60ab-logs\") pod \"nova-api-0\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.622586 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c12564-44ab-408c-b2ed-2466bff3274d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7c12564-44ab-408c-b2ed-2466bff3274d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.622678 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hddd\" (UniqueName: \"kubernetes.io/projected/f7c12564-44ab-408c-b2ed-2466bff3274d-kube-api-access-2hddd\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7c12564-44ab-408c-b2ed-2466bff3274d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.624087 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d80333c-99df-466b-bb44-6fd9177b60ab-logs\") pod \"nova-api-0\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.647777 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d80333c-99df-466b-bb44-6fd9177b60ab-config-data\") pod \"nova-api-0\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " 
pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.662016 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d80333c-99df-466b-bb44-6fd9177b60ab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.667394 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8knch\" (UniqueName: \"kubernetes.io/projected/0d80333c-99df-466b-bb44-6fd9177b60ab-kube-api-access-8knch\") pod \"nova-api-0\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.673174 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.709737 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.731767 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hddd\" (UniqueName: \"kubernetes.io/projected/f7c12564-44ab-408c-b2ed-2466bff3274d-kube-api-access-2hddd\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7c12564-44ab-408c-b2ed-2466bff3274d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.731905 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7c12564-44ab-408c-b2ed-2466bff3274d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7c12564-44ab-408c-b2ed-2466bff3274d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.731990 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.732008 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvvdb\" (UniqueName: \"kubernetes.io/projected/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-kube-api-access-dvvdb\") pod \"nova-scheduler-0\" (UID: \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.732062 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c12564-44ab-408c-b2ed-2466bff3274d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7c12564-44ab-408c-b2ed-2466bff3274d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.732220 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-config-data\") pod \"nova-scheduler-0\" (UID: \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.745632 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c12564-44ab-408c-b2ed-2466bff3274d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7c12564-44ab-408c-b2ed-2466bff3274d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.746169 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7c12564-44ab-408c-b2ed-2466bff3274d-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"f7c12564-44ab-408c-b2ed-2466bff3274d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.768370 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hddd\" (UniqueName: \"kubernetes.io/projected/f7c12564-44ab-408c-b2ed-2466bff3274d-kube-api-access-2hddd\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7c12564-44ab-408c-b2ed-2466bff3274d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.811194 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.817988 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.825469 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.836391 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-config-data\") pod \"nova-scheduler-0\" (UID: \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.836677 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.836711 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvvdb\" (UniqueName: \"kubernetes.io/projected/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-kube-api-access-dvvdb\") pod 
\"nova-scheduler-0\" (UID: \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.845172 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.858025 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.876180 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvvdb\" (UniqueName: \"kubernetes.io/projected/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-kube-api-access-dvvdb\") pod \"nova-scheduler-0\" (UID: \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.876254 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.878981 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-config-data\") pod \"nova-scheduler-0\" (UID: \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.982059 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9b0f39-419d-4b99-b497-d16adb6a6afc-config-data\") pod \"nova-metadata-0\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " pod="openstack/nova-metadata-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.983005 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9b0f39-419d-4b99-b497-d16adb6a6afc-logs\") pod \"nova-metadata-0\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " pod="openstack/nova-metadata-0" Feb 26 08:43:00 crc kubenswrapper[4741]: I0226 08:43:00.984492 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.006436 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxx9f\" (UniqueName: \"kubernetes.io/projected/2f9b0f39-419d-4b99-b497-d16adb6a6afc-kube-api-access-kxx9f\") pod \"nova-metadata-0\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " pod="openstack/nova-metadata-0" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.007605 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9b0f39-419d-4b99-b497-d16adb6a6afc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " pod="openstack/nova-metadata-0" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.048012 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-s26dw"] Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.051790 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.107742 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-s26dw"] Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.114577 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-config\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.114642 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.114706 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.114766 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.114795 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kxx9f\" (UniqueName: \"kubernetes.io/projected/2f9b0f39-419d-4b99-b497-d16adb6a6afc-kube-api-access-kxx9f\") pod \"nova-metadata-0\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " pod="openstack/nova-metadata-0" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.115083 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.115208 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9b0f39-419d-4b99-b497-d16adb6a6afc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " pod="openstack/nova-metadata-0" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.115271 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9b0f39-419d-4b99-b497-d16adb6a6afc-config-data\") pod \"nova-metadata-0\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " pod="openstack/nova-metadata-0" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.115506 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9b0f39-419d-4b99-b497-d16adb6a6afc-logs\") pod \"nova-metadata-0\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " pod="openstack/nova-metadata-0" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.115756 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs9pj\" (UniqueName: \"kubernetes.io/projected/3b59536e-ad66-4a9d-89a6-2a6479e8be01-kube-api-access-qs9pj\") pod 
\"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.117313 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9b0f39-419d-4b99-b497-d16adb6a6afc-logs\") pod \"nova-metadata-0\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " pod="openstack/nova-metadata-0" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.127104 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9b0f39-419d-4b99-b497-d16adb6a6afc-config-data\") pod \"nova-metadata-0\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " pod="openstack/nova-metadata-0" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.128081 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9b0f39-419d-4b99-b497-d16adb6a6afc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " pod="openstack/nova-metadata-0" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.146877 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxx9f\" (UniqueName: \"kubernetes.io/projected/2f9b0f39-419d-4b99-b497-d16adb6a6afc-kube-api-access-kxx9f\") pod \"nova-metadata-0\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " pod="openstack/nova-metadata-0" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.167457 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.217725 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs9pj\" (UniqueName: \"kubernetes.io/projected/3b59536e-ad66-4a9d-89a6-2a6479e8be01-kube-api-access-qs9pj\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.217815 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-config\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.217857 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.217936 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.218023 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" 
Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.218136 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.218982 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-config\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.219102 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.219387 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.219793 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.220045 4741 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.230869 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.255837 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs9pj\" (UniqueName: \"kubernetes.io/projected/3b59536e-ad66-4a9d-89a6-2a6479e8be01-kube-api-access-qs9pj\") pod \"dnsmasq-dns-5fbc4d444f-s26dw\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:01 crc kubenswrapper[4741]: I0226 08:43:01.412830 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.625935 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vstzs"] Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.629033 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.632915 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.633271 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.662089 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vstzs"] Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.666891 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42c5t\" (UniqueName: \"kubernetes.io/projected/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-kube-api-access-42c5t\") pod \"nova-cell1-conductor-db-sync-vstzs\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.666985 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-scripts\") pod \"nova-cell1-conductor-db-sync-vstzs\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.667071 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vstzs\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.667131 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-config-data\") pod \"nova-cell1-conductor-db-sync-vstzs\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.768792 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42c5t\" (UniqueName: \"kubernetes.io/projected/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-kube-api-access-42c5t\") pod \"nova-cell1-conductor-db-sync-vstzs\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.768885 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-scripts\") pod \"nova-cell1-conductor-db-sync-vstzs\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.768955 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vstzs\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.768991 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-config-data\") pod \"nova-cell1-conductor-db-sync-vstzs\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.783054 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-config-data\") pod \"nova-cell1-conductor-db-sync-vstzs\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.783896 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vstzs\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.787925 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-scripts\") pod \"nova-cell1-conductor-db-sync-vstzs\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.792649 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42c5t\" (UniqueName: \"kubernetes.io/projected/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-kube-api-access-42c5t\") pod \"nova-cell1-conductor-db-sync-vstzs\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:02 crc kubenswrapper[4741]: I0226 08:43:02.972119 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:04 crc kubenswrapper[4741]: I0226 08:43:04.699285 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 08:43:04 crc kubenswrapper[4741]: I0226 08:43:04.756416 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:06 crc kubenswrapper[4741]: I0226 08:43:06.788578 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:43:06 crc kubenswrapper[4741]: E0226 08:43:06.790027 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:43:08 crc kubenswrapper[4741]: I0226 08:43:08.005036 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:08 crc kubenswrapper[4741]: W0226 08:43:08.053368 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf09ad43d_1fea_4335_97aa_5428b9be77dd.slice/crio-894fa47bc66b64898dab657865b2b010a668f873c3caa645676c63322a620c8e WatchSource:0}: Error finding container 894fa47bc66b64898dab657865b2b010a668f873c3caa645676c63322a620c8e: Status 404 returned error can't find the container with id 894fa47bc66b64898dab657865b2b010a668f873c3caa645676c63322a620c8e Feb 26 08:43:08 crc kubenswrapper[4741]: I0226 08:43:08.084341 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-sd6r4"] Feb 26 08:43:08 crc kubenswrapper[4741]: I0226 08:43:08.322545 4741 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f9b0f39-419d-4b99-b497-d16adb6a6afc","Type":"ContainerStarted","Data":"1980d4128c83d74968ed8bc99fefece8635cdf66e2132f26dd59a5972b09991b"} Feb 26 08:43:08 crc kubenswrapper[4741]: I0226 08:43:08.332371 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sd6r4" event={"ID":"f09ad43d-1fea-4335-97aa-5428b9be77dd","Type":"ContainerStarted","Data":"894fa47bc66b64898dab657865b2b010a668f873c3caa645676c63322a620c8e"} Feb 26 08:43:08 crc kubenswrapper[4741]: I0226 08:43:08.355643 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-m2b27" event={"ID":"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153","Type":"ContainerStarted","Data":"c27fb3161fe1f4c1a6196dd49acea1f335f3e9177e9f5c639e3639fee0c27a22"} Feb 26 08:43:08 crc kubenswrapper[4741]: I0226 08:43:08.383144 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:43:08 crc kubenswrapper[4741]: I0226 08:43:08.401975 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:43:08 crc kubenswrapper[4741]: I0226 08:43:08.420833 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vstzs"] Feb 26 08:43:08 crc kubenswrapper[4741]: I0226 08:43:08.440877 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-s26dw"] Feb 26 08:43:08 crc kubenswrapper[4741]: I0226 08:43:08.457316 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-m2b27" podStartSLOduration=2.739248061 podStartE2EDuration="11.457260555s" podCreationTimestamp="2026-02-26 08:42:57 +0000 UTC" firstStartedPulling="2026-02-26 08:42:58.319790879 +0000 UTC m=+1813.315728266" lastFinishedPulling="2026-02-26 08:43:07.037803373 +0000 UTC m=+1822.033740760" observedRunningTime="2026-02-26 08:43:08.386665638 +0000 UTC 
m=+1823.382603025" watchObservedRunningTime="2026-02-26 08:43:08.457260555 +0000 UTC m=+1823.453197932" Feb 26 08:43:08 crc kubenswrapper[4741]: I0226 08:43:08.508102 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 08:43:08 crc kubenswrapper[4741]: W0226 08:43:08.523389 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7c12564_44ab_408c_b2ed_2466bff3274d.slice/crio-6521e2072922f79b1f3728183b54edd24fa8b3295eef416b62f33176363fd9fb WatchSource:0}: Error finding container 6521e2072922f79b1f3728183b54edd24fa8b3295eef416b62f33176363fd9fb: Status 404 returned error can't find the container with id 6521e2072922f79b1f3728183b54edd24fa8b3295eef416b62f33176363fd9fb Feb 26 08:43:09 crc kubenswrapper[4741]: I0226 08:43:09.377753 4741 generic.go:334] "Generic (PLEG): container finished" podID="3b59536e-ad66-4a9d-89a6-2a6479e8be01" containerID="3b4f0d2e7eb6ecc43cc249861696caea39d12311905d5c7ea73e440403ee976f" exitCode=0 Feb 26 08:43:09 crc kubenswrapper[4741]: I0226 08:43:09.377867 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" event={"ID":"3b59536e-ad66-4a9d-89a6-2a6479e8be01","Type":"ContainerDied","Data":"3b4f0d2e7eb6ecc43cc249861696caea39d12311905d5c7ea73e440403ee976f"} Feb 26 08:43:09 crc kubenswrapper[4741]: I0226 08:43:09.378323 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" event={"ID":"3b59536e-ad66-4a9d-89a6-2a6479e8be01","Type":"ContainerStarted","Data":"e132243dae9161d9853bff5efdd8d9118670d3e96098180ff73e314aeb6ade67"} Feb 26 08:43:09 crc kubenswrapper[4741]: I0226 08:43:09.386957 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sd6r4" 
event={"ID":"f09ad43d-1fea-4335-97aa-5428b9be77dd","Type":"ContainerStarted","Data":"91d6c32f01f092463fa2ca40792b8c9692da94955b74b493c2cbf6f85e1985f8"} Feb 26 08:43:09 crc kubenswrapper[4741]: I0226 08:43:09.405369 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d80333c-99df-466b-bb44-6fd9177b60ab","Type":"ContainerStarted","Data":"6847e27710d40a2659a175c018039b2182648d33ecec42c16b06246b16a99753"} Feb 26 08:43:09 crc kubenswrapper[4741]: I0226 08:43:09.423121 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vstzs" event={"ID":"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78","Type":"ContainerStarted","Data":"6835d6f1676b747eae4836ceccf97dcf45c19870031b5c5d2b045beb3f9acd1f"} Feb 26 08:43:09 crc kubenswrapper[4741]: I0226 08:43:09.423197 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vstzs" event={"ID":"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78","Type":"ContainerStarted","Data":"0516e2f896f9e1c6c90975b19c7bf61e3afe1dc88450f5a460a88db879f2033d"} Feb 26 08:43:09 crc kubenswrapper[4741]: I0226 08:43:09.429953 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f7c12564-44ab-408c-b2ed-2466bff3274d","Type":"ContainerStarted","Data":"6521e2072922f79b1f3728183b54edd24fa8b3295eef416b62f33176363fd9fb"} Feb 26 08:43:09 crc kubenswrapper[4741]: I0226 08:43:09.432281 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"05be8a42-88d6-4e4d-8ed5-0a8281655c9d","Type":"ContainerStarted","Data":"3a460db19b21645e2ca82757e282866a43cda9dbee92edca65744ccf5dda0ce0"} Feb 26 08:43:09 crc kubenswrapper[4741]: I0226 08:43:09.485446 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-vstzs" podStartSLOduration=7.485414543 podStartE2EDuration="7.485414543s" podCreationTimestamp="2026-02-26 08:43:02 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:43:09.467879825 +0000 UTC m=+1824.463817212" watchObservedRunningTime="2026-02-26 08:43:09.485414543 +0000 UTC m=+1824.481351930" Feb 26 08:43:09 crc kubenswrapper[4741]: I0226 08:43:09.487097 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-sd6r4" podStartSLOduration=9.487082000000001 podStartE2EDuration="9.487082s" podCreationTimestamp="2026-02-26 08:43:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:43:09.444697776 +0000 UTC m=+1824.440635173" watchObservedRunningTime="2026-02-26 08:43:09.487082 +0000 UTC m=+1824.483019387" Feb 26 08:43:12 crc kubenswrapper[4741]: I0226 08:43:12.505471 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f7c12564-44ab-408c-b2ed-2466bff3274d","Type":"ContainerStarted","Data":"29dd4e78c1c29ffab0dc8c1e091f821fc77fbf7b04e670f69f3867d1bf4543ff"} Feb 26 08:43:12 crc kubenswrapper[4741]: I0226 08:43:12.505646 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f7c12564-44ab-408c-b2ed-2466bff3274d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://29dd4e78c1c29ffab0dc8c1e091f821fc77fbf7b04e670f69f3867d1bf4543ff" gracePeriod=30 Feb 26 08:43:12 crc kubenswrapper[4741]: I0226 08:43:12.507757 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"05be8a42-88d6-4e4d-8ed5-0a8281655c9d","Type":"ContainerStarted","Data":"52e2ec788d126d1514fabce7a558907c12f78e8f5fe455daf862fc816f1b82cb"} Feb 26 08:43:12 crc kubenswrapper[4741]: I0226 08:43:12.514859 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"2f9b0f39-419d-4b99-b497-d16adb6a6afc","Type":"ContainerStarted","Data":"de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b"} Feb 26 08:43:12 crc kubenswrapper[4741]: I0226 08:43:12.517897 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" event={"ID":"3b59536e-ad66-4a9d-89a6-2a6479e8be01","Type":"ContainerStarted","Data":"9ea4157099a711c21f9106b8271267a0332e3615f60d9fdee52b05841234c597"} Feb 26 08:43:12 crc kubenswrapper[4741]: I0226 08:43:12.518020 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:12 crc kubenswrapper[4741]: I0226 08:43:12.520335 4741 generic.go:334] "Generic (PLEG): container finished" podID="7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153" containerID="c27fb3161fe1f4c1a6196dd49acea1f335f3e9177e9f5c639e3639fee0c27a22" exitCode=0 Feb 26 08:43:12 crc kubenswrapper[4741]: I0226 08:43:12.520419 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-m2b27" event={"ID":"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153","Type":"ContainerDied","Data":"c27fb3161fe1f4c1a6196dd49acea1f335f3e9177e9f5c639e3639fee0c27a22"} Feb 26 08:43:12 crc kubenswrapper[4741]: I0226 08:43:12.529988 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=9.272777939000001 podStartE2EDuration="12.529962303s" podCreationTimestamp="2026-02-26 08:43:00 +0000 UTC" firstStartedPulling="2026-02-26 08:43:08.528183061 +0000 UTC m=+1823.524120448" lastFinishedPulling="2026-02-26 08:43:11.785367425 +0000 UTC m=+1826.781304812" observedRunningTime="2026-02-26 08:43:12.523739006 +0000 UTC m=+1827.519676393" watchObservedRunningTime="2026-02-26 08:43:12.529962303 +0000 UTC m=+1827.525899690" Feb 26 08:43:12 crc kubenswrapper[4741]: I0226 08:43:12.535577 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"0d80333c-99df-466b-bb44-6fd9177b60ab","Type":"ContainerStarted","Data":"b58a93bbbbf7733b6039451b71783dc57a30f2730b6b341ceab0e8d245577726"} Feb 26 08:43:12 crc kubenswrapper[4741]: I0226 08:43:12.560958 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" podStartSLOduration=12.560936253 podStartE2EDuration="12.560936253s" podCreationTimestamp="2026-02-26 08:43:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:43:12.553715398 +0000 UTC m=+1827.549652795" watchObservedRunningTime="2026-02-26 08:43:12.560936253 +0000 UTC m=+1827.556873640" Feb 26 08:43:12 crc kubenswrapper[4741]: I0226 08:43:12.605470 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=9.227033788 podStartE2EDuration="12.605446939s" podCreationTimestamp="2026-02-26 08:43:00 +0000 UTC" firstStartedPulling="2026-02-26 08:43:08.405852543 +0000 UTC m=+1823.401789920" lastFinishedPulling="2026-02-26 08:43:11.784265684 +0000 UTC m=+1826.780203071" observedRunningTime="2026-02-26 08:43:12.592745038 +0000 UTC m=+1827.588682425" watchObservedRunningTime="2026-02-26 08:43:12.605446939 +0000 UTC m=+1827.601384326" Feb 26 08:43:13 crc kubenswrapper[4741]: I0226 08:43:13.165899 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 08:43:13 crc kubenswrapper[4741]: I0226 08:43:13.558782 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2f9b0f39-419d-4b99-b497-d16adb6a6afc" containerName="nova-metadata-log" containerID="cri-o://de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b" gracePeriod=30 Feb 26 08:43:13 crc kubenswrapper[4741]: I0226 08:43:13.559758 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"2f9b0f39-419d-4b99-b497-d16adb6a6afc","Type":"ContainerStarted","Data":"24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5"} Feb 26 08:43:13 crc kubenswrapper[4741]: I0226 08:43:13.560720 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2f9b0f39-419d-4b99-b497-d16adb6a6afc" containerName="nova-metadata-metadata" containerID="cri-o://24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5" gracePeriod=30 Feb 26 08:43:13 crc kubenswrapper[4741]: I0226 08:43:13.563332 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d80333c-99df-466b-bb44-6fd9177b60ab","Type":"ContainerStarted","Data":"75515b60f6f9e1f22d431794d8fffd31ff0c757a8642d0222b5de30b6a4e8262"} Feb 26 08:43:13 crc kubenswrapper[4741]: I0226 08:43:13.605135 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=9.846326463 podStartE2EDuration="13.605093857s" podCreationTimestamp="2026-02-26 08:43:00 +0000 UTC" firstStartedPulling="2026-02-26 08:43:08.025746917 +0000 UTC m=+1823.021684304" lastFinishedPulling="2026-02-26 08:43:11.784514311 +0000 UTC m=+1826.780451698" observedRunningTime="2026-02-26 08:43:13.587485436 +0000 UTC m=+1828.583422843" watchObservedRunningTime="2026-02-26 08:43:13.605093857 +0000 UTC m=+1828.601031244" Feb 26 08:43:13 crc kubenswrapper[4741]: I0226 08:43:13.615878 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=10.236211056 podStartE2EDuration="13.615850242s" podCreationTimestamp="2026-02-26 08:43:00 +0000 UTC" firstStartedPulling="2026-02-26 08:43:08.405689398 +0000 UTC m=+1823.401626785" lastFinishedPulling="2026-02-26 08:43:11.785328574 +0000 UTC m=+1826.781265971" observedRunningTime="2026-02-26 08:43:13.612357433 +0000 UTC m=+1828.608294820" 
watchObservedRunningTime="2026-02-26 08:43:13.615850242 +0000 UTC m=+1828.611787629" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.046050 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-m2b27" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.152078 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-combined-ca-bundle\") pod \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.152430 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-scripts\") pod \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.152541 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-config-data\") pod \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.152662 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wrh7\" (UniqueName: \"kubernetes.io/projected/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-kube-api-access-5wrh7\") pod \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\" (UID: \"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153\") " Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.168361 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-scripts" (OuterVolumeSpecName: "scripts") pod "7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153" (UID: 
"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.170686 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-kube-api-access-5wrh7" (OuterVolumeSpecName: "kube-api-access-5wrh7") pod "7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153" (UID: "7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153"). InnerVolumeSpecName "kube-api-access-5wrh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.215922 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-config-data" (OuterVolumeSpecName: "config-data") pod "7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153" (UID: "7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.237412 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153" (UID: "7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.256996 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.257041 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.257053 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.257061 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wrh7\" (UniqueName: \"kubernetes.io/projected/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153-kube-api-access-5wrh7\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.382070 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.462187 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9b0f39-419d-4b99-b497-d16adb6a6afc-logs\") pod \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.462378 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9b0f39-419d-4b99-b497-d16adb6a6afc-combined-ca-bundle\") pod \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.462484 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9b0f39-419d-4b99-b497-d16adb6a6afc-config-data\") pod \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.462507 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxx9f\" (UniqueName: \"kubernetes.io/projected/2f9b0f39-419d-4b99-b497-d16adb6a6afc-kube-api-access-kxx9f\") pod \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\" (UID: \"2f9b0f39-419d-4b99-b497-d16adb6a6afc\") " Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.464864 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f9b0f39-419d-4b99-b497-d16adb6a6afc-logs" (OuterVolumeSpecName: "logs") pod "2f9b0f39-419d-4b99-b497-d16adb6a6afc" (UID: "2f9b0f39-419d-4b99-b497-d16adb6a6afc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.481629 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f9b0f39-419d-4b99-b497-d16adb6a6afc-kube-api-access-kxx9f" (OuterVolumeSpecName: "kube-api-access-kxx9f") pod "2f9b0f39-419d-4b99-b497-d16adb6a6afc" (UID: "2f9b0f39-419d-4b99-b497-d16adb6a6afc"). InnerVolumeSpecName "kube-api-access-kxx9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.509347 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9b0f39-419d-4b99-b497-d16adb6a6afc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f9b0f39-419d-4b99-b497-d16adb6a6afc" (UID: "2f9b0f39-419d-4b99-b497-d16adb6a6afc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.515213 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9b0f39-419d-4b99-b497-d16adb6a6afc-config-data" (OuterVolumeSpecName: "config-data") pod "2f9b0f39-419d-4b99-b497-d16adb6a6afc" (UID: "2f9b0f39-419d-4b99-b497-d16adb6a6afc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.565889 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9b0f39-419d-4b99-b497-d16adb6a6afc-logs\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.565925 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9b0f39-419d-4b99-b497-d16adb6a6afc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.565940 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9b0f39-419d-4b99-b497-d16adb6a6afc-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.565950 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxx9f\" (UniqueName: \"kubernetes.io/projected/2f9b0f39-419d-4b99-b497-d16adb6a6afc-kube-api-access-kxx9f\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.579818 4741 generic.go:334] "Generic (PLEG): container finished" podID="2f9b0f39-419d-4b99-b497-d16adb6a6afc" containerID="24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5" exitCode=0 Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.579874 4741 generic.go:334] "Generic (PLEG): container finished" podID="2f9b0f39-419d-4b99-b497-d16adb6a6afc" containerID="de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b" exitCode=143 Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.579922 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.579935 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f9b0f39-419d-4b99-b497-d16adb6a6afc","Type":"ContainerDied","Data":"24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5"} Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.580078 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f9b0f39-419d-4b99-b497-d16adb6a6afc","Type":"ContainerDied","Data":"de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b"} Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.580093 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f9b0f39-419d-4b99-b497-d16adb6a6afc","Type":"ContainerDied","Data":"1980d4128c83d74968ed8bc99fefece8635cdf66e2132f26dd59a5972b09991b"} Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.580220 4741 scope.go:117] "RemoveContainer" containerID="24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.584962 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-m2b27" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.592402 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-m2b27" event={"ID":"7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153","Type":"ContainerDied","Data":"c0a7b103309cf4955e07904d0bfc5c6374664bbcdb664a7aaa5718f4fbd1bd3f"} Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.592470 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0a7b103309cf4955e07904d0bfc5c6374664bbcdb664a7aaa5718f4fbd1bd3f" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.615780 4741 scope.go:117] "RemoveContainer" containerID="de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.636524 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.652881 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.654378 4741 scope.go:117] "RemoveContainer" containerID="24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5" Feb 26 08:43:14 crc kubenswrapper[4741]: E0226 08:43:14.658264 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5\": container with ID starting with 24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5 not found: ID does not exist" containerID="24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.658317 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5"} err="failed to get container status 
\"24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5\": rpc error: code = NotFound desc = could not find container \"24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5\": container with ID starting with 24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5 not found: ID does not exist" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.658349 4741 scope.go:117] "RemoveContainer" containerID="de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b" Feb 26 08:43:14 crc kubenswrapper[4741]: E0226 08:43:14.661258 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b\": container with ID starting with de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b not found: ID does not exist" containerID="de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.661291 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b"} err="failed to get container status \"de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b\": rpc error: code = NotFound desc = could not find container \"de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b\": container with ID starting with de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b not found: ID does not exist" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.661315 4741 scope.go:117] "RemoveContainer" containerID="24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.665399 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5"} err="failed to get 
container status \"24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5\": rpc error: code = NotFound desc = could not find container \"24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5\": container with ID starting with 24dec1a082cf40f3457166348209618bb4880fa03c957bf3112cd16271abd9c5 not found: ID does not exist" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.665460 4741 scope.go:117] "RemoveContainer" containerID="de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.665911 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b"} err="failed to get container status \"de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b\": rpc error: code = NotFound desc = could not find container \"de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b\": container with ID starting with de5194239d52b8ca877d91eb23ddd4d4c5505311b1231587c5ee24c536380f8b not found: ID does not exist" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.689883 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:14 crc kubenswrapper[4741]: E0226 08:43:14.692270 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153" containerName="aodh-db-sync" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.692311 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153" containerName="aodh-db-sync" Feb 26 08:43:14 crc kubenswrapper[4741]: E0226 08:43:14.692342 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9b0f39-419d-4b99-b497-d16adb6a6afc" containerName="nova-metadata-log" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.692352 4741 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2f9b0f39-419d-4b99-b497-d16adb6a6afc" containerName="nova-metadata-log" Feb 26 08:43:14 crc kubenswrapper[4741]: E0226 08:43:14.692386 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9b0f39-419d-4b99-b497-d16adb6a6afc" containerName="nova-metadata-metadata" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.692392 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f9b0f39-419d-4b99-b497-d16adb6a6afc" containerName="nova-metadata-metadata" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.692700 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f9b0f39-419d-4b99-b497-d16adb6a6afc" containerName="nova-metadata-log" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.692729 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153" containerName="aodh-db-sync" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.695924 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f9b0f39-419d-4b99-b497-d16adb6a6afc" containerName="nova-metadata-metadata" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.701523 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.709413 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.709630 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.709817 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.772751 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q568z\" (UniqueName: \"kubernetes.io/projected/862e217c-cd95-44a0-a03f-db0c23bf961b-kube-api-access-q568z\") pod \"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.772912 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-config-data\") pod \"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.772984 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862e217c-cd95-44a0-a03f-db0c23bf961b-logs\") pod \"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.773145 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.773174 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.886762 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-config-data\") pod \"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.886866 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862e217c-cd95-44a0-a03f-db0c23bf961b-logs\") pod \"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.886985 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.887015 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.887081 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q568z\" (UniqueName: \"kubernetes.io/projected/862e217c-cd95-44a0-a03f-db0c23bf961b-kube-api-access-q568z\") pod \"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.887814 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862e217c-cd95-44a0-a03f-db0c23bf961b-logs\") pod \"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.896641 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-config-data\") pod \"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.900875 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.906875 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:14 crc kubenswrapper[4741]: I0226 08:43:14.923344 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q568z\" (UniqueName: \"kubernetes.io/projected/862e217c-cd95-44a0-a03f-db0c23bf961b-kube-api-access-q568z\") pod 
\"nova-metadata-0\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " pod="openstack/nova-metadata-0" Feb 26 08:43:15 crc kubenswrapper[4741]: I0226 08:43:15.036292 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:43:15 crc kubenswrapper[4741]: I0226 08:43:15.651568 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:15 crc kubenswrapper[4741]: I0226 08:43:15.804399 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f9b0f39-419d-4b99-b497-d16adb6a6afc" path="/var/lib/kubelet/pods/2f9b0f39-419d-4b99-b497-d16adb6a6afc/volumes" Feb 26 08:43:15 crc kubenswrapper[4741]: I0226 08:43:15.979763 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:16 crc kubenswrapper[4741]: I0226 08:43:16.168432 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 08:43:16 crc kubenswrapper[4741]: I0226 08:43:16.618345 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"862e217c-cd95-44a0-a03f-db0c23bf961b","Type":"ContainerStarted","Data":"fb5d5705b9263c88e71af6b0b6675219522d86ec64a67795f34b87c38465b17e"} Feb 26 08:43:16 crc kubenswrapper[4741]: I0226 08:43:16.618415 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"862e217c-cd95-44a0-a03f-db0c23bf961b","Type":"ContainerStarted","Data":"ba130917b46da08449bc2793304eea51db7837c70034190a5c8a2b7ac61c6f1c"} Feb 26 08:43:16 crc kubenswrapper[4741]: I0226 08:43:16.618433 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"862e217c-cd95-44a0-a03f-db0c23bf961b","Type":"ContainerStarted","Data":"d5514f789d3068f67972b656fb6674330eb2aca32e3a28b99a450e8edc3929eb"} Feb 26 08:43:16 crc kubenswrapper[4741]: I0226 08:43:16.661010 4741 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.660977119 podStartE2EDuration="2.660977119s" podCreationTimestamp="2026-02-26 08:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:43:16.644949423 +0000 UTC m=+1831.640886820" watchObservedRunningTime="2026-02-26 08:43:16.660977119 +0000 UTC m=+1831.656914506" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.699262 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.705318 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.713129 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.713349 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tlszt" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.713469 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.723059 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.786037 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr8mg\" (UniqueName: \"kubernetes.io/projected/f44805d0-fcfe-4241-b658-dd8d905936fa-kube-api-access-kr8mg\") pod \"aodh-0\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " pod="openstack/aodh-0" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.786180 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-scripts\") pod \"aodh-0\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " pod="openstack/aodh-0" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.786416 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-config-data\") pod \"aodh-0\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " pod="openstack/aodh-0" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.786451 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " pod="openstack/aodh-0" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.899987 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-config-data\") pod \"aodh-0\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " pod="openstack/aodh-0" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.900048 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " pod="openstack/aodh-0" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.900174 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr8mg\" (UniqueName: \"kubernetes.io/projected/f44805d0-fcfe-4241-b658-dd8d905936fa-kube-api-access-kr8mg\") pod \"aodh-0\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " pod="openstack/aodh-0" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.900232 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-scripts\") pod \"aodh-0\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " pod="openstack/aodh-0" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.915089 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " pod="openstack/aodh-0" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.915998 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-scripts\") pod \"aodh-0\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " pod="openstack/aodh-0" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.949465 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-config-data\") pod \"aodh-0\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " pod="openstack/aodh-0" Feb 26 08:43:17 crc kubenswrapper[4741]: I0226 08:43:17.961538 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr8mg\" (UniqueName: \"kubernetes.io/projected/f44805d0-fcfe-4241-b658-dd8d905936fa-kube-api-access-kr8mg\") pod \"aodh-0\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " pod="openstack/aodh-0" Feb 26 08:43:18 crc kubenswrapper[4741]: I0226 08:43:18.032175 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 26 08:43:18 crc kubenswrapper[4741]: I0226 08:43:18.888889 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 26 08:43:19 crc kubenswrapper[4741]: I0226 08:43:19.698427 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f44805d0-fcfe-4241-b658-dd8d905936fa","Type":"ContainerStarted","Data":"6f9d19dfe21498b108f314cdefc2ffe5f6caed43014f5c43760e9c1a942ca375"} Feb 26 08:43:20 crc kubenswrapper[4741]: I0226 08:43:20.036399 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 08:43:20 crc kubenswrapper[4741]: I0226 08:43:20.036454 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 08:43:20 crc kubenswrapper[4741]: I0226 08:43:20.717206 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f44805d0-fcfe-4241-b658-dd8d905936fa","Type":"ContainerStarted","Data":"3a212edd7d1a8bf35358313d461d312ac11dc46e618a3388012545557ea9bc3d"} Feb 26 08:43:20 crc kubenswrapper[4741]: I0226 08:43:20.719860 4741 generic.go:334] "Generic (PLEG): container finished" podID="f09ad43d-1fea-4335-97aa-5428b9be77dd" containerID="91d6c32f01f092463fa2ca40792b8c9692da94955b74b493c2cbf6f85e1985f8" exitCode=0 Feb 26 08:43:20 crc kubenswrapper[4741]: I0226 08:43:20.719895 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sd6r4" event={"ID":"f09ad43d-1fea-4335-97aa-5428b9be77dd","Type":"ContainerDied","Data":"91d6c32f01f092463fa2ca40792b8c9692da94955b74b493c2cbf6f85e1985f8"} Feb 26 08:43:20 crc kubenswrapper[4741]: I0226 08:43:20.790500 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:43:20 crc kubenswrapper[4741]: E0226 08:43:20.790940 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:43:20 crc kubenswrapper[4741]: I0226 08:43:20.860325 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 08:43:20 crc kubenswrapper[4741]: I0226 08:43:20.860402 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 08:43:21 crc kubenswrapper[4741]: I0226 08:43:21.168896 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 08:43:21 crc kubenswrapper[4741]: I0226 08:43:21.388242 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 08:43:21 crc kubenswrapper[4741]: I0226 08:43:21.414396 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:43:21 crc kubenswrapper[4741]: I0226 08:43:21.530414 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-878nl"] Feb 26 08:43:21 crc kubenswrapper[4741]: I0226 08:43:21.533177 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" podUID="3bf01190-91b0-483f-a7fa-a05dff13c5c0" containerName="dnsmasq-dns" containerID="cri-o://9b70af1d892965234405f30c436883b4682fbf49e94c4d6c5ce8e4da2cb456d7" gracePeriod=10 Feb 26 08:43:21 crc kubenswrapper[4741]: I0226 08:43:21.757173 4741 generic.go:334] "Generic (PLEG): container finished" podID="a8a4a741-4c8b-49fc-8f2d-499d5aa61f78" containerID="6835d6f1676b747eae4836ceccf97dcf45c19870031b5c5d2b045beb3f9acd1f" exitCode=0 Feb 26 
08:43:21 crc kubenswrapper[4741]: I0226 08:43:21.757311 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vstzs" event={"ID":"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78","Type":"ContainerDied","Data":"6835d6f1676b747eae4836ceccf97dcf45c19870031b5c5d2b045beb3f9acd1f"} Feb 26 08:43:21 crc kubenswrapper[4741]: I0226 08:43:21.764796 4741 generic.go:334] "Generic (PLEG): container finished" podID="3bf01190-91b0-483f-a7fa-a05dff13c5c0" containerID="9b70af1d892965234405f30c436883b4682fbf49e94c4d6c5ce8e4da2cb456d7" exitCode=0 Feb 26 08:43:21 crc kubenswrapper[4741]: I0226 08:43:21.764980 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" event={"ID":"3bf01190-91b0-483f-a7fa-a05dff13c5c0","Type":"ContainerDied","Data":"9b70af1d892965234405f30c436883b4682fbf49e94c4d6c5ce8e4da2cb456d7"} Feb 26 08:43:21 crc kubenswrapper[4741]: I0226 08:43:21.864724 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 08:43:21 crc kubenswrapper[4741]: I0226 08:43:21.945387 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d80333c-99df-466b-bb44-6fd9177b60ab" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 08:43:21 crc kubenswrapper[4741]: I0226 08:43:21.945404 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d80333c-99df-466b-bb44-6fd9177b60ab" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.788473 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.807665 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-scripts\") pod \"f09ad43d-1fea-4335-97aa-5428b9be77dd\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.808079 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-combined-ca-bundle\") pod \"f09ad43d-1fea-4335-97aa-5428b9be77dd\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.808284 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-config-data\") pod \"f09ad43d-1fea-4335-97aa-5428b9be77dd\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.808488 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph84m\" (UniqueName: \"kubernetes.io/projected/f09ad43d-1fea-4335-97aa-5428b9be77dd-kube-api-access-ph84m\") pod \"f09ad43d-1fea-4335-97aa-5428b9be77dd\" (UID: \"f09ad43d-1fea-4335-97aa-5428b9be77dd\") " Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.828062 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09ad43d-1fea-4335-97aa-5428b9be77dd-kube-api-access-ph84m" (OuterVolumeSpecName: "kube-api-access-ph84m") pod "f09ad43d-1fea-4335-97aa-5428b9be77dd" (UID: "f09ad43d-1fea-4335-97aa-5428b9be77dd"). InnerVolumeSpecName "kube-api-access-ph84m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.839629 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.877897 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sd6r4" Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.878165 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-scripts" (OuterVolumeSpecName: "scripts") pod "f09ad43d-1fea-4335-97aa-5428b9be77dd" (UID: "f09ad43d-1fea-4335-97aa-5428b9be77dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.878256 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sd6r4" event={"ID":"f09ad43d-1fea-4335-97aa-5428b9be77dd","Type":"ContainerDied","Data":"894fa47bc66b64898dab657865b2b010a668f873c3caa645676c63322a620c8e"} Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.878322 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="894fa47bc66b64898dab657865b2b010a668f873c3caa645676c63322a620c8e" Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.925854 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph84m\" (UniqueName: \"kubernetes.io/projected/f09ad43d-1fea-4335-97aa-5428b9be77dd-kube-api-access-ph84m\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.925894 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:22 crc kubenswrapper[4741]: I0226 08:43:22.978294 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f09ad43d-1fea-4335-97aa-5428b9be77dd" (UID: "f09ad43d-1fea-4335-97aa-5428b9be77dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.046350 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-config-data" (OuterVolumeSpecName: "config-data") pod "f09ad43d-1fea-4335-97aa-5428b9be77dd" (UID: "f09ad43d-1fea-4335-97aa-5428b9be77dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.054658 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.054722 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09ad43d-1fea-4335-97aa-5428b9be77dd-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.171401 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.259852 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-dns-swift-storage-0\") pod \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.260248 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmkm9\" (UniqueName: \"kubernetes.io/projected/3bf01190-91b0-483f-a7fa-a05dff13c5c0-kube-api-access-zmkm9\") pod \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.260296 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-dns-svc\") pod \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.260337 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-config\") pod \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.260472 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-ovsdbserver-nb\") pod \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.260772 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-ovsdbserver-sb\") pod \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\" (UID: \"3bf01190-91b0-483f-a7fa-a05dff13c5c0\") " Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.276076 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf01190-91b0-483f-a7fa-a05dff13c5c0-kube-api-access-zmkm9" (OuterVolumeSpecName: "kube-api-access-zmkm9") pod "3bf01190-91b0-483f-a7fa-a05dff13c5c0" (UID: "3bf01190-91b0-483f-a7fa-a05dff13c5c0"). InnerVolumeSpecName "kube-api-access-zmkm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.378858 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmkm9\" (UniqueName: \"kubernetes.io/projected/3bf01190-91b0-483f-a7fa-a05dff13c5c0-kube-api-access-zmkm9\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.422347 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3bf01190-91b0-483f-a7fa-a05dff13c5c0" (UID: "3bf01190-91b0-483f-a7fa-a05dff13c5c0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.443289 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3bf01190-91b0-483f-a7fa-a05dff13c5c0" (UID: "3bf01190-91b0-483f-a7fa-a05dff13c5c0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.467099 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-config" (OuterVolumeSpecName: "config") pod "3bf01190-91b0-483f-a7fa-a05dff13c5c0" (UID: "3bf01190-91b0-483f-a7fa-a05dff13c5c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.473793 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3bf01190-91b0-483f-a7fa-a05dff13c5c0" (UID: "3bf01190-91b0-483f-a7fa-a05dff13c5c0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.474483 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3bf01190-91b0-483f-a7fa-a05dff13c5c0" (UID: "3bf01190-91b0-483f-a7fa-a05dff13c5c0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.480863 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.480909 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.480923 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.480937 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.480951 4741 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bf01190-91b0-483f-a7fa-a05dff13c5c0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.683676 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.688913 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42c5t\" (UniqueName: \"kubernetes.io/projected/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-kube-api-access-42c5t\") pod \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.689050 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-combined-ca-bundle\") pod \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.690006 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-scripts\") pod \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.690068 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-config-data\") pod \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.693923 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-kube-api-access-42c5t" (OuterVolumeSpecName: "kube-api-access-42c5t") pod "a8a4a741-4c8b-49fc-8f2d-499d5aa61f78" (UID: "a8a4a741-4c8b-49fc-8f2d-499d5aa61f78"). InnerVolumeSpecName "kube-api-access-42c5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.740485 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-scripts" (OuterVolumeSpecName: "scripts") pod "a8a4a741-4c8b-49fc-8f2d-499d5aa61f78" (UID: "a8a4a741-4c8b-49fc-8f2d-499d5aa61f78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.760318 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8a4a741-4c8b-49fc-8f2d-499d5aa61f78" (UID: "a8a4a741-4c8b-49fc-8f2d-499d5aa61f78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.792701 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-config-data" (OuterVolumeSpecName: "config-data") pod "a8a4a741-4c8b-49fc-8f2d-499d5aa61f78" (UID: "a8a4a741-4c8b-49fc-8f2d-499d5aa61f78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.792967 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-config-data\") pod \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\" (UID: \"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78\") " Feb 26 08:43:23 crc kubenswrapper[4741]: W0226 08:43:23.793883 4741 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78/volumes/kubernetes.io~secret/config-data Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.793903 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-config-data" (OuterVolumeSpecName: "config-data") pod "a8a4a741-4c8b-49fc-8f2d-499d5aa61f78" (UID: "a8a4a741-4c8b-49fc-8f2d-499d5aa61f78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.795679 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42c5t\" (UniqueName: \"kubernetes.io/projected/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-kube-api-access-42c5t\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.795702 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.795712 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.795722 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.957520 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" event={"ID":"3bf01190-91b0-483f-a7fa-a05dff13c5c0","Type":"ContainerDied","Data":"6c9d6eb266d1c3cf2fe24a468878390c1c2405cd20a16ddfa7bc7ce9f0f439c0"} Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.957590 4741 scope.go:117] "RemoveContainer" containerID="9b70af1d892965234405f30c436883b4682fbf49e94c4d6c5ce8e4da2cb456d7" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.958070 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-878nl" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.987209 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vstzs" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.987222 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vstzs" event={"ID":"a8a4a741-4c8b-49fc-8f2d-499d5aa61f78","Type":"ContainerDied","Data":"0516e2f896f9e1c6c90975b19c7bf61e3afe1dc88450f5a460a88db879f2033d"} Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.987288 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0516e2f896f9e1c6c90975b19c7bf61e3afe1dc88450f5a460a88db879f2033d" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.993763 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 08:43:23 crc kubenswrapper[4741]: E0226 08:43:23.994849 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf01190-91b0-483f-a7fa-a05dff13c5c0" containerName="init" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.994880 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf01190-91b0-483f-a7fa-a05dff13c5c0" containerName="init" Feb 26 08:43:23 crc kubenswrapper[4741]: E0226 08:43:23.994925 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf01190-91b0-483f-a7fa-a05dff13c5c0" containerName="dnsmasq-dns" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.994935 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf01190-91b0-483f-a7fa-a05dff13c5c0" containerName="dnsmasq-dns" Feb 26 08:43:23 crc kubenswrapper[4741]: E0226 08:43:23.994954 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8a4a741-4c8b-49fc-8f2d-499d5aa61f78" containerName="nova-cell1-conductor-db-sync" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.994963 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8a4a741-4c8b-49fc-8f2d-499d5aa61f78" containerName="nova-cell1-conductor-db-sync" Feb 26 08:43:23 crc kubenswrapper[4741]: 
E0226 08:43:23.995017 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09ad43d-1fea-4335-97aa-5428b9be77dd" containerName="nova-manage" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.995027 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09ad43d-1fea-4335-97aa-5428b9be77dd" containerName="nova-manage" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.995498 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8a4a741-4c8b-49fc-8f2d-499d5aa61f78" containerName="nova-cell1-conductor-db-sync" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.995547 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09ad43d-1fea-4335-97aa-5428b9be77dd" containerName="nova-manage" Feb 26 08:43:23 crc kubenswrapper[4741]: I0226 08:43:23.995561 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf01190-91b0-483f-a7fa-a05dff13c5c0" containerName="dnsmasq-dns" Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:23.997288 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.002789 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.010165 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f44805d0-fcfe-4241-b658-dd8d905936fa","Type":"ContainerStarted","Data":"78b10d3f71f41913730999ef9d6b01ac9f4fbb51215aeef619971b35a3108158"} Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.013334 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb22d1d6-9459-4a95-b70e-38c325c092bd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bb22d1d6-9459-4a95-b70e-38c325c092bd\") " pod="openstack/nova-cell1-conductor-0" Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.013389 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t5qx\" (UniqueName: \"kubernetes.io/projected/bb22d1d6-9459-4a95-b70e-38c325c092bd-kube-api-access-5t5qx\") pod \"nova-cell1-conductor-0\" (UID: \"bb22d1d6-9459-4a95-b70e-38c325c092bd\") " pod="openstack/nova-cell1-conductor-0" Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.013735 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb22d1d6-9459-4a95-b70e-38c325c092bd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bb22d1d6-9459-4a95-b70e-38c325c092bd\") " pod="openstack/nova-cell1-conductor-0" Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.073034 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.117460 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb22d1d6-9459-4a95-b70e-38c325c092bd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bb22d1d6-9459-4a95-b70e-38c325c092bd\") " pod="openstack/nova-cell1-conductor-0" Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.119084 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb22d1d6-9459-4a95-b70e-38c325c092bd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bb22d1d6-9459-4a95-b70e-38c325c092bd\") " pod="openstack/nova-cell1-conductor-0" Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.119143 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t5qx\" (UniqueName: \"kubernetes.io/projected/bb22d1d6-9459-4a95-b70e-38c325c092bd-kube-api-access-5t5qx\") pod \"nova-cell1-conductor-0\" (UID: \"bb22d1d6-9459-4a95-b70e-38c325c092bd\") " pod="openstack/nova-cell1-conductor-0" Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.131262 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb22d1d6-9459-4a95-b70e-38c325c092bd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bb22d1d6-9459-4a95-b70e-38c325c092bd\") " pod="openstack/nova-cell1-conductor-0" Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.131897 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb22d1d6-9459-4a95-b70e-38c325c092bd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bb22d1d6-9459-4a95-b70e-38c325c092bd\") " pod="openstack/nova-cell1-conductor-0" Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.152350 4741 scope.go:117] "RemoveContainer" containerID="79568d53e01bcf164fb686f3623bc2f2942a086e6c311dfdea0c46c457ed4648" Feb 26 08:43:24 crc 
kubenswrapper[4741]: I0226 08:43:24.155164 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-878nl"] Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.171393 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t5qx\" (UniqueName: \"kubernetes.io/projected/bb22d1d6-9459-4a95-b70e-38c325c092bd-kube-api-access-5t5qx\") pod \"nova-cell1-conductor-0\" (UID: \"bb22d1d6-9459-4a95-b70e-38c325c092bd\") " pod="openstack/nova-cell1-conductor-0" Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.187545 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-878nl"] Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.229555 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.229939 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0d80333c-99df-466b-bb44-6fd9177b60ab" containerName="nova-api-log" containerID="cri-o://b58a93bbbbf7733b6039451b71783dc57a30f2730b6b341ceab0e8d245577726" gracePeriod=30 Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.232886 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0d80333c-99df-466b-bb44-6fd9177b60ab" containerName="nova-api-api" containerID="cri-o://75515b60f6f9e1f22d431794d8fffd31ff0c757a8642d0222b5de30b6a4e8262" gracePeriod=30 Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.274219 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.274565 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="05be8a42-88d6-4e4d-8ed5-0a8281655c9d" containerName="nova-scheduler-scheduler" 
containerID="cri-o://52e2ec788d126d1514fabce7a558907c12f78e8f5fe455daf862fc816f1b82cb" gracePeriod=30 Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.311338 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.311682 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="862e217c-cd95-44a0-a03f-db0c23bf961b" containerName="nova-metadata-log" containerID="cri-o://ba130917b46da08449bc2793304eea51db7837c70034190a5c8a2b7ac61c6f1c" gracePeriod=30 Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.312590 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="862e217c-cd95-44a0-a03f-db0c23bf961b" containerName="nova-metadata-metadata" containerID="cri-o://fb5d5705b9263c88e71af6b0b6675219522d86ec64a67795f34b87c38465b17e" gracePeriod=30 Feb 26 08:43:24 crc kubenswrapper[4741]: I0226 08:43:24.371004 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 26 08:43:25 crc kubenswrapper[4741]: I0226 08:43:25.083281 4741 generic.go:334] "Generic (PLEG): container finished" podID="0d80333c-99df-466b-bb44-6fd9177b60ab" containerID="b58a93bbbbf7733b6039451b71783dc57a30f2730b6b341ceab0e8d245577726" exitCode=143 Feb 26 08:43:25 crc kubenswrapper[4741]: I0226 08:43:25.083526 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d80333c-99df-466b-bb44-6fd9177b60ab","Type":"ContainerDied","Data":"b58a93bbbbf7733b6039451b71783dc57a30f2730b6b341ceab0e8d245577726"} Feb 26 08:43:25 crc kubenswrapper[4741]: I0226 08:43:25.087629 4741 generic.go:334] "Generic (PLEG): container finished" podID="862e217c-cd95-44a0-a03f-db0c23bf961b" containerID="ba130917b46da08449bc2793304eea51db7837c70034190a5c8a2b7ac61c6f1c" exitCode=143 Feb 26 08:43:25 crc kubenswrapper[4741]: I0226 08:43:25.087705 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"862e217c-cd95-44a0-a03f-db0c23bf961b","Type":"ContainerDied","Data":"ba130917b46da08449bc2793304eea51db7837c70034190a5c8a2b7ac61c6f1c"} Feb 26 08:43:25 crc kubenswrapper[4741]: I0226 08:43:25.170680 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 08:43:25 crc kubenswrapper[4741]: I0226 08:43:25.811231 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf01190-91b0-483f-a7fa-a05dff13c5c0" path="/var/lib/kubelet/pods/3bf01190-91b0-483f-a7fa-a05dff13c5c0/volumes" Feb 26 08:43:25 crc kubenswrapper[4741]: I0226 08:43:25.895122 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.028995 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-combined-ca-bundle\") pod \"862e217c-cd95-44a0-a03f-db0c23bf961b\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.029491 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q568z\" (UniqueName: \"kubernetes.io/projected/862e217c-cd95-44a0-a03f-db0c23bf961b-kube-api-access-q568z\") pod \"862e217c-cd95-44a0-a03f-db0c23bf961b\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.029653 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-nova-metadata-tls-certs\") pod \"862e217c-cd95-44a0-a03f-db0c23bf961b\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.029747 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862e217c-cd95-44a0-a03f-db0c23bf961b-logs\") pod \"862e217c-cd95-44a0-a03f-db0c23bf961b\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.029997 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-config-data\") pod \"862e217c-cd95-44a0-a03f-db0c23bf961b\" (UID: \"862e217c-cd95-44a0-a03f-db0c23bf961b\") " Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.035522 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/862e217c-cd95-44a0-a03f-db0c23bf961b-logs" (OuterVolumeSpecName: "logs") pod "862e217c-cd95-44a0-a03f-db0c23bf961b" (UID: "862e217c-cd95-44a0-a03f-db0c23bf961b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.047493 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862e217c-cd95-44a0-a03f-db0c23bf961b-kube-api-access-q568z" (OuterVolumeSpecName: "kube-api-access-q568z") pod "862e217c-cd95-44a0-a03f-db0c23bf961b" (UID: "862e217c-cd95-44a0-a03f-db0c23bf961b"). InnerVolumeSpecName "kube-api-access-q568z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.121619 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-config-data" (OuterVolumeSpecName: "config-data") pod "862e217c-cd95-44a0-a03f-db0c23bf961b" (UID: "862e217c-cd95-44a0-a03f-db0c23bf961b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.139418 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.139484 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q568z\" (UniqueName: \"kubernetes.io/projected/862e217c-cd95-44a0-a03f-db0c23bf961b-kube-api-access-q568z\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.139501 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862e217c-cd95-44a0-a03f-db0c23bf961b-logs\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.153322 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "862e217c-cd95-44a0-a03f-db0c23bf961b" (UID: "862e217c-cd95-44a0-a03f-db0c23bf961b"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.159196 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.159695 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="ceilometer-central-agent" containerID="cri-o://5cb6a5220aa1238c041f516674549a8a68caa8a0bb8b6ca231cfc96f2d6573c5" gracePeriod=30 Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.160378 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="proxy-httpd" containerID="cri-o://f1249916a0ed5a8a5b511e1e6740aaa52cfef2ce08c8cf322eeea578de2d8308" gracePeriod=30 Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.160555 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="ceilometer-notification-agent" containerID="cri-o://81843d81d7bd0783df7e81cfce00bb053eee8674563b77cdd8939046c29e42a8" gracePeriod=30 Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.160619 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="sg-core" containerID="cri-o://4b83ca6e2e571b2ce0ebe921939504468f2deeb85528497e7869bb22bbc29685" gracePeriod=30 Feb 26 08:43:26 crc kubenswrapper[4741]: E0226 08:43:26.188339 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52e2ec788d126d1514fabce7a558907c12f78e8f5fe455daf862fc816f1b82cb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 
08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.189665 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bb22d1d6-9459-4a95-b70e-38c325c092bd","Type":"ContainerStarted","Data":"cc10afa557f52a50e80a14c8145fee2575f1cf6dd9ab1c18faf090183c728a1e"} Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.189760 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bb22d1d6-9459-4a95-b70e-38c325c092bd","Type":"ContainerStarted","Data":"e963ce74f021ba6b43d920b1c87020eb3e67dd65e5a21fd0e0356caaffededd5"} Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.195977 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.203487 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "862e217c-cd95-44a0-a03f-db0c23bf961b" (UID: "862e217c-cd95-44a0-a03f-db0c23bf961b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:26 crc kubenswrapper[4741]: E0226 08:43:26.220343 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52e2ec788d126d1514fabce7a558907c12f78e8f5fe455daf862fc816f1b82cb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.240779 4741 generic.go:334] "Generic (PLEG): container finished" podID="862e217c-cd95-44a0-a03f-db0c23bf961b" containerID="fb5d5705b9263c88e71af6b0b6675219522d86ec64a67795f34b87c38465b17e" exitCode=0 Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.240863 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"862e217c-cd95-44a0-a03f-db0c23bf961b","Type":"ContainerDied","Data":"fb5d5705b9263c88e71af6b0b6675219522d86ec64a67795f34b87c38465b17e"} Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.240903 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"862e217c-cd95-44a0-a03f-db0c23bf961b","Type":"ContainerDied","Data":"d5514f789d3068f67972b656fb6674330eb2aca32e3a28b99a450e8edc3929eb"} Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.240928 4741 scope.go:117] "RemoveContainer" containerID="fb5d5705b9263c88e71af6b0b6675219522d86ec64a67795f34b87c38465b17e" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.242693 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.243053 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.245036 4741 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/862e217c-cd95-44a0-a03f-db0c23bf961b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:26 crc kubenswrapper[4741]: E0226 08:43:26.267760 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52e2ec788d126d1514fabce7a558907c12f78e8f5fe455daf862fc816f1b82cb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 08:43:26 crc kubenswrapper[4741]: E0226 08:43:26.267855 4741 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="05be8a42-88d6-4e4d-8ed5-0a8281655c9d" containerName="nova-scheduler-scheduler" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.294437 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.295857 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="90779734-9d35-47b9-ac0b-dbf02e3453a5" containerName="kube-state-metrics" containerID="cri-o://f09a6890b605d00eda41015a682a9896e7936efdd60e9de08b6e772fc7494f14" gracePeriod=30 Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.324537 4741 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.325727 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="2f7c78eb-9208-46a4-a062-0bb536df511d" containerName="mysqld-exporter" containerID="cri-o://105b02ef743e46b98835604bb69caaae59a63350fcaa5f62b6d2b022065ec3b7" gracePeriod=30 Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.332189 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.332158669 podStartE2EDuration="3.332158669s" podCreationTimestamp="2026-02-26 08:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:43:26.238190398 +0000 UTC m=+1841.234127805" watchObservedRunningTime="2026-02-26 08:43:26.332158669 +0000 UTC m=+1841.328096046" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.409237 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.487211 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.532216 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:26 crc kubenswrapper[4741]: E0226 08:43:26.533098 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862e217c-cd95-44a0-a03f-db0c23bf961b" containerName="nova-metadata-metadata" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.533141 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="862e217c-cd95-44a0-a03f-db0c23bf961b" containerName="nova-metadata-metadata" Feb 26 08:43:26 crc kubenswrapper[4741]: E0226 08:43:26.533203 4741 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="862e217c-cd95-44a0-a03f-db0c23bf961b" containerName="nova-metadata-log" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.533214 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="862e217c-cd95-44a0-a03f-db0c23bf961b" containerName="nova-metadata-log" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.533489 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="862e217c-cd95-44a0-a03f-db0c23bf961b" containerName="nova-metadata-log" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.533513 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="862e217c-cd95-44a0-a03f-db0c23bf961b" containerName="nova-metadata-metadata" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.535150 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.543294 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.543616 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.545950 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.670692 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.670969 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-config-data\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.671092 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.671131 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv24z\" (UniqueName: \"kubernetes.io/projected/593cc4eb-043e-484d-a1b5-9d83fbd630e7-kube-api-access-qv24z\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.671371 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593cc4eb-043e-484d-a1b5-9d83fbd630e7-logs\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.773817 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.773926 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-config-data\") pod \"nova-metadata-0\" (UID: 
\"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.773982 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.773999 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv24z\" (UniqueName: \"kubernetes.io/projected/593cc4eb-043e-484d-a1b5-9d83fbd630e7-kube-api-access-qv24z\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.774820 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593cc4eb-043e-484d-a1b5-9d83fbd630e7-logs\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.775499 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593cc4eb-043e-484d-a1b5-9d83fbd630e7-logs\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.779372 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.779677 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.780573 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-config-data\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.793935 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv24z\" (UniqueName: \"kubernetes.io/projected/593cc4eb-043e-484d-a1b5-9d83fbd630e7-kube-api-access-qv24z\") pod \"nova-metadata-0\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " pod="openstack/nova-metadata-0" Feb 26 08:43:26 crc kubenswrapper[4741]: I0226 08:43:26.882590 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.282589 4741 generic.go:334] "Generic (PLEG): container finished" podID="90779734-9d35-47b9-ac0b-dbf02e3453a5" containerID="f09a6890b605d00eda41015a682a9896e7936efdd60e9de08b6e772fc7494f14" exitCode=2 Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.282912 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"90779734-9d35-47b9-ac0b-dbf02e3453a5","Type":"ContainerDied","Data":"f09a6890b605d00eda41015a682a9896e7936efdd60e9de08b6e772fc7494f14"} Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.304007 4741 generic.go:334] "Generic (PLEG): container finished" podID="2f7c78eb-9208-46a4-a062-0bb536df511d" containerID="105b02ef743e46b98835604bb69caaae59a63350fcaa5f62b6d2b022065ec3b7" exitCode=2 Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.304097 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2f7c78eb-9208-46a4-a062-0bb536df511d","Type":"ContainerDied","Data":"105b02ef743e46b98835604bb69caaae59a63350fcaa5f62b6d2b022065ec3b7"} Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.306557 4741 generic.go:334] "Generic (PLEG): container finished" podID="05be8a42-88d6-4e4d-8ed5-0a8281655c9d" containerID="52e2ec788d126d1514fabce7a558907c12f78e8f5fe455daf862fc816f1b82cb" exitCode=0 Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.306635 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"05be8a42-88d6-4e4d-8ed5-0a8281655c9d","Type":"ContainerDied","Data":"52e2ec788d126d1514fabce7a558907c12f78e8f5fe455daf862fc816f1b82cb"} Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.310098 4741 generic.go:334] "Generic (PLEG): container finished" podID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerID="f1249916a0ed5a8a5b511e1e6740aaa52cfef2ce08c8cf322eeea578de2d8308" exitCode=0 Feb 26 
08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.310136 4741 generic.go:334] "Generic (PLEG): container finished" podID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerID="4b83ca6e2e571b2ce0ebe921939504468f2deeb85528497e7869bb22bbc29685" exitCode=2 Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.310144 4741 generic.go:334] "Generic (PLEG): container finished" podID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerID="5cb6a5220aa1238c041f516674549a8a68caa8a0bb8b6ca231cfc96f2d6573c5" exitCode=0 Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.314066 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6d66c8-28cb-499d-a09e-cefaf8f2020b","Type":"ContainerDied","Data":"f1249916a0ed5a8a5b511e1e6740aaa52cfef2ce08c8cf322eeea578de2d8308"} Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.314127 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6d66c8-28cb-499d-a09e-cefaf8f2020b","Type":"ContainerDied","Data":"4b83ca6e2e571b2ce0ebe921939504468f2deeb85528497e7869bb22bbc29685"} Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.314145 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6d66c8-28cb-499d-a09e-cefaf8f2020b","Type":"ContainerDied","Data":"5cb6a5220aa1238c041f516674549a8a68caa8a0bb8b6ca231cfc96f2d6573c5"} Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.319756 4741 scope.go:117] "RemoveContainer" containerID="ba130917b46da08449bc2793304eea51db7837c70034190a5c8a2b7ac61c6f1c" Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.514701 4741 scope.go:117] "RemoveContainer" containerID="fb5d5705b9263c88e71af6b0b6675219522d86ec64a67795f34b87c38465b17e" Feb 26 08:43:27 crc kubenswrapper[4741]: E0226 08:43:27.515854 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fb5d5705b9263c88e71af6b0b6675219522d86ec64a67795f34b87c38465b17e\": container with ID starting with fb5d5705b9263c88e71af6b0b6675219522d86ec64a67795f34b87c38465b17e not found: ID does not exist" containerID="fb5d5705b9263c88e71af6b0b6675219522d86ec64a67795f34b87c38465b17e" Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.515893 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb5d5705b9263c88e71af6b0b6675219522d86ec64a67795f34b87c38465b17e"} err="failed to get container status \"fb5d5705b9263c88e71af6b0b6675219522d86ec64a67795f34b87c38465b17e\": rpc error: code = NotFound desc = could not find container \"fb5d5705b9263c88e71af6b0b6675219522d86ec64a67795f34b87c38465b17e\": container with ID starting with fb5d5705b9263c88e71af6b0b6675219522d86ec64a67795f34b87c38465b17e not found: ID does not exist" Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.515915 4741 scope.go:117] "RemoveContainer" containerID="ba130917b46da08449bc2793304eea51db7837c70034190a5c8a2b7ac61c6f1c" Feb 26 08:43:27 crc kubenswrapper[4741]: E0226 08:43:27.516677 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba130917b46da08449bc2793304eea51db7837c70034190a5c8a2b7ac61c6f1c\": container with ID starting with ba130917b46da08449bc2793304eea51db7837c70034190a5c8a2b7ac61c6f1c not found: ID does not exist" containerID="ba130917b46da08449bc2793304eea51db7837c70034190a5c8a2b7ac61c6f1c" Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.516701 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba130917b46da08449bc2793304eea51db7837c70034190a5c8a2b7ac61c6f1c"} err="failed to get container status \"ba130917b46da08449bc2793304eea51db7837c70034190a5c8a2b7ac61c6f1c\": rpc error: code = NotFound desc = could not find container \"ba130917b46da08449bc2793304eea51db7837c70034190a5c8a2b7ac61c6f1c\": container with ID 
starting with ba130917b46da08449bc2793304eea51db7837c70034190a5c8a2b7ac61c6f1c not found: ID does not exist" Feb 26 08:43:27 crc kubenswrapper[4741]: I0226 08:43:27.816966 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862e217c-cd95-44a0-a03f-db0c23bf961b" path="/var/lib/kubelet/pods/862e217c-cd95-44a0-a03f-db0c23bf961b/volumes" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.073673 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.262631 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-combined-ca-bundle\") pod \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\" (UID: \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\") " Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.262828 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-config-data\") pod \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\" (UID: \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\") " Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.263032 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvvdb\" (UniqueName: \"kubernetes.io/projected/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-kube-api-access-dvvdb\") pod \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\" (UID: \"05be8a42-88d6-4e4d-8ed5-0a8281655c9d\") " Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.305562 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-kube-api-access-dvvdb" (OuterVolumeSpecName: "kube-api-access-dvvdb") pod "05be8a42-88d6-4e4d-8ed5-0a8281655c9d" (UID: "05be8a42-88d6-4e4d-8ed5-0a8281655c9d"). 
InnerVolumeSpecName "kube-api-access-dvvdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.367881 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvvdb\" (UniqueName: \"kubernetes.io/projected/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-kube-api-access-dvvdb\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.382973 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-config-data" (OuterVolumeSpecName: "config-data") pod "05be8a42-88d6-4e4d-8ed5-0a8281655c9d" (UID: "05be8a42-88d6-4e4d-8ed5-0a8281655c9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.415562 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05be8a42-88d6-4e4d-8ed5-0a8281655c9d" (UID: "05be8a42-88d6-4e4d-8ed5-0a8281655c9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.450557 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"05be8a42-88d6-4e4d-8ed5-0a8281655c9d","Type":"ContainerDied","Data":"3a460db19b21645e2ca82757e282866a43cda9dbee92edca65744ccf5dda0ce0"} Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.450629 4741 scope.go:117] "RemoveContainer" containerID="52e2ec788d126d1514fabce7a558907c12f78e8f5fe455daf862fc816f1b82cb" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.450794 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.472249 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.472303 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05be8a42-88d6-4e4d-8ed5-0a8281655c9d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.475329 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f44805d0-fcfe-4241-b658-dd8d905936fa","Type":"ContainerStarted","Data":"4dd9b38796bb9d950cd1d723463f9847b72b77aa6e43972a231c73f1138d3181"} Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.597216 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.634028 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.635451 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.642934 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.666520 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:43:28 crc kubenswrapper[4741]: E0226 08:43:28.672644 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7c78eb-9208-46a4-a062-0bb536df511d" containerName="mysqld-exporter" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.672688 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7c78eb-9208-46a4-a062-0bb536df511d" containerName="mysqld-exporter" Feb 26 08:43:28 crc kubenswrapper[4741]: E0226 08:43:28.672742 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05be8a42-88d6-4e4d-8ed5-0a8281655c9d" containerName="nova-scheduler-scheduler" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.672748 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="05be8a42-88d6-4e4d-8ed5-0a8281655c9d" containerName="nova-scheduler-scheduler" Feb 26 08:43:28 crc kubenswrapper[4741]: E0226 08:43:28.672771 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90779734-9d35-47b9-ac0b-dbf02e3453a5" containerName="kube-state-metrics" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.672778 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="90779734-9d35-47b9-ac0b-dbf02e3453a5" containerName="kube-state-metrics" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.673049 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="05be8a42-88d6-4e4d-8ed5-0a8281655c9d" containerName="nova-scheduler-scheduler" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.673093 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="90779734-9d35-47b9-ac0b-dbf02e3453a5" containerName="kube-state-metrics" Feb 
26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.673122 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7c78eb-9208-46a4-a062-0bb536df511d" containerName="mysqld-exporter" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.674254 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.688315 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.719268 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.844365 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfh68\" (UniqueName: \"kubernetes.io/projected/90779734-9d35-47b9-ac0b-dbf02e3453a5-kube-api-access-jfh68\") pod \"90779734-9d35-47b9-ac0b-dbf02e3453a5\" (UID: \"90779734-9d35-47b9-ac0b-dbf02e3453a5\") " Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.844435 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f7c78eb-9208-46a4-a062-0bb536df511d-combined-ca-bundle\") pod \"2f7c78eb-9208-46a4-a062-0bb536df511d\" (UID: \"2f7c78eb-9208-46a4-a062-0bb536df511d\") " Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.844475 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f7c78eb-9208-46a4-a062-0bb536df511d-config-data\") pod \"2f7c78eb-9208-46a4-a062-0bb536df511d\" (UID: \"2f7c78eb-9208-46a4-a062-0bb536df511d\") " Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.844570 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm9bd\" (UniqueName: 
\"kubernetes.io/projected/2f7c78eb-9208-46a4-a062-0bb536df511d-kube-api-access-fm9bd\") pod \"2f7c78eb-9208-46a4-a062-0bb536df511d\" (UID: \"2f7c78eb-9208-46a4-a062-0bb536df511d\") " Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.880399 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-config-data\") pod \"nova-scheduler-0\" (UID: \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.881099 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2kvr\" (UniqueName: \"kubernetes.io/projected/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-kube-api-access-k2kvr\") pod \"nova-scheduler-0\" (UID: \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.882215 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.886988 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90779734-9d35-47b9-ac0b-dbf02e3453a5-kube-api-access-jfh68" (OuterVolumeSpecName: "kube-api-access-jfh68") pod "90779734-9d35-47b9-ac0b-dbf02e3453a5" (UID: "90779734-9d35-47b9-ac0b-dbf02e3453a5"). InnerVolumeSpecName "kube-api-access-jfh68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.887826 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.888468 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfh68\" (UniqueName: \"kubernetes.io/projected/90779734-9d35-47b9-ac0b-dbf02e3453a5-kube-api-access-jfh68\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.889377 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7c78eb-9208-46a4-a062-0bb536df511d-kube-api-access-fm9bd" (OuterVolumeSpecName: "kube-api-access-fm9bd") pod "2f7c78eb-9208-46a4-a062-0bb536df511d" (UID: "2f7c78eb-9208-46a4-a062-0bb536df511d"). InnerVolumeSpecName "kube-api-access-fm9bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.973284 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7c78eb-9208-46a4-a062-0bb536df511d-config-data" (OuterVolumeSpecName: "config-data") pod "2f7c78eb-9208-46a4-a062-0bb536df511d" (UID: "2f7c78eb-9208-46a4-a062-0bb536df511d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.997771 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.997920 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-config-data\") pod \"nova-scheduler-0\" (UID: \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.998169 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2kvr\" (UniqueName: \"kubernetes.io/projected/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-kube-api-access-k2kvr\") pod \"nova-scheduler-0\" (UID: \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.998736 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f7c78eb-9208-46a4-a062-0bb536df511d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:28 crc kubenswrapper[4741]: I0226 08:43:28.998759 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm9bd\" (UniqueName: \"kubernetes.io/projected/2f7c78eb-9208-46a4-a062-0bb536df511d-kube-api-access-fm9bd\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.017051 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-config-data\") pod \"nova-scheduler-0\" (UID: 
\"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.017272 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.036590 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2kvr\" (UniqueName: \"kubernetes.io/projected/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-kube-api-access-k2kvr\") pod \"nova-scheduler-0\" (UID: \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\") " pod="openstack/nova-scheduler-0" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.052549 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7c78eb-9208-46a4-a062-0bb536df511d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f7c78eb-9208-46a4-a062-0bb536df511d" (UID: "2f7c78eb-9208-46a4-a062-0bb536df511d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.103784 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f7c78eb-9208-46a4-a062-0bb536df511d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.338083 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.572826 4741 generic.go:334] "Generic (PLEG): container finished" podID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerID="81843d81d7bd0783df7e81cfce00bb053eee8674563b77cdd8939046c29e42a8" exitCode=0 Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.573063 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6d66c8-28cb-499d-a09e-cefaf8f2020b","Type":"ContainerDied","Data":"81843d81d7bd0783df7e81cfce00bb053eee8674563b77cdd8939046c29e42a8"} Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.598409 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.599777 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"90779734-9d35-47b9-ac0b-dbf02e3453a5","Type":"ContainerDied","Data":"1544e4551c264026fd5032717ea9256d572bd947af9fbda87ccfe1aeb3af9af5"} Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.599872 4741 scope.go:117] "RemoveContainer" containerID="f09a6890b605d00eda41015a682a9896e7936efdd60e9de08b6e772fc7494f14" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.610717 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"593cc4eb-043e-484d-a1b5-9d83fbd630e7","Type":"ContainerStarted","Data":"445c74bf98826c26e2dee795c1ea3bf40f32b5c43ff7efc10a0393a9e2444451"} Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.610789 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"593cc4eb-043e-484d-a1b5-9d83fbd630e7","Type":"ContainerStarted","Data":"48c6f9f6cb1575417168de80f8ae444b07dd01f5cad4129c569bc055d0cb65b9"} Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.635242 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.637858 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2f7c78eb-9208-46a4-a062-0bb536df511d","Type":"ContainerDied","Data":"edb0f1c585b259a8dcb0f02b69ce0931ba03bc81e9a5016215fe30d309fa5bdf"} Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.665706 4741 scope.go:117] "RemoveContainer" containerID="105b02ef743e46b98835604bb69caaae59a63350fcaa5f62b6d2b022065ec3b7" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.694178 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.852703 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05be8a42-88d6-4e4d-8ed5-0a8281655c9d" path="/var/lib/kubelet/pods/05be8a42-88d6-4e4d-8ed5-0a8281655c9d/volumes" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.859984 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.864365 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.870820 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.883837 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.884334 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.888186 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.935896 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.960388 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.985591 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 08:43:29 crc kubenswrapper[4741]: I0226 08:43:29.995985 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:29.998470 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.005448 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.006017 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.039316 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67\") " pod="openstack/kube-state-metrics-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.039496 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l269t\" (UniqueName: \"kubernetes.io/projected/8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67-kube-api-access-l269t\") pod \"kube-state-metrics-0\" (UID: \"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67\") " pod="openstack/kube-state-metrics-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.039565 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67\") " pod="openstack/kube-state-metrics-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.039958 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67\") " pod="openstack/kube-state-metrics-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.131842 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.145252 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1caae2-0233-4346-afc0-4729c5e567b0-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"5c1caae2-0233-4346-afc0-4729c5e567b0\") " pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.145321 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f28c\" (UniqueName: \"kubernetes.io/projected/5c1caae2-0233-4346-afc0-4729c5e567b0-kube-api-access-5f28c\") pod \"mysqld-exporter-0\" (UID: \"5c1caae2-0233-4346-afc0-4729c5e567b0\") " pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.145348 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c1caae2-0233-4346-afc0-4729c5e567b0-config-data\") pod \"mysqld-exporter-0\" (UID: \"5c1caae2-0233-4346-afc0-4729c5e567b0\") " pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.145639 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67\") " pod="openstack/kube-state-metrics-0" Feb 26 08:43:30 crc 
kubenswrapper[4741]: I0226 08:43:30.145775 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l269t\" (UniqueName: \"kubernetes.io/projected/8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67-kube-api-access-l269t\") pod \"kube-state-metrics-0\" (UID: \"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67\") " pod="openstack/kube-state-metrics-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.145822 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67\") " pod="openstack/kube-state-metrics-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.146013 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67\") " pod="openstack/kube-state-metrics-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.163432 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1caae2-0233-4346-afc0-4729c5e567b0-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"5c1caae2-0233-4346-afc0-4729c5e567b0\") " pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.180473 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67\") " pod="openstack/kube-state-metrics-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.181838 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67\") " pod="openstack/kube-state-metrics-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.183414 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.183730 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67\") " pod="openstack/kube-state-metrics-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.187585 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l269t\" (UniqueName: \"kubernetes.io/projected/8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67-kube-api-access-l269t\") pod \"kube-state-metrics-0\" (UID: \"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67\") " pod="openstack/kube-state-metrics-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.265825 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpz2x\" (UniqueName: \"kubernetes.io/projected/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-kube-api-access-kpz2x\") pod \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.265960 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-scripts\") pod \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 
08:43:30.265997 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-config-data\") pod \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.266139 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-sg-core-conf-yaml\") pod \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.266217 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-run-httpd\") pod \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.266479 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-combined-ca-bundle\") pod \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.266512 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-log-httpd\") pod \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\" (UID: \"cc6d66c8-28cb-499d-a09e-cefaf8f2020b\") " Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.267202 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1caae2-0233-4346-afc0-4729c5e567b0-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: 
\"5c1caae2-0233-4346-afc0-4729c5e567b0\") " pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.267330 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1caae2-0233-4346-afc0-4729c5e567b0-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"5c1caae2-0233-4346-afc0-4729c5e567b0\") " pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.267357 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f28c\" (UniqueName: \"kubernetes.io/projected/5c1caae2-0233-4346-afc0-4729c5e567b0-kube-api-access-5f28c\") pod \"mysqld-exporter-0\" (UID: \"5c1caae2-0233-4346-afc0-4729c5e567b0\") " pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.267378 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c1caae2-0233-4346-afc0-4729c5e567b0-config-data\") pod \"mysqld-exporter-0\" (UID: \"5c1caae2-0233-4346-afc0-4729c5e567b0\") " pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.268783 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cc6d66c8-28cb-499d-a09e-cefaf8f2020b" (UID: "cc6d66c8-28cb-499d-a09e-cefaf8f2020b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.269746 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cc6d66c8-28cb-499d-a09e-cefaf8f2020b" (UID: "cc6d66c8-28cb-499d-a09e-cefaf8f2020b"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.275482 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-kube-api-access-kpz2x" (OuterVolumeSpecName: "kube-api-access-kpz2x") pod "cc6d66c8-28cb-499d-a09e-cefaf8f2020b" (UID: "cc6d66c8-28cb-499d-a09e-cefaf8f2020b"). InnerVolumeSpecName "kube-api-access-kpz2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.276004 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1caae2-0233-4346-afc0-4729c5e567b0-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"5c1caae2-0233-4346-afc0-4729c5e567b0\") " pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.277175 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1caae2-0233-4346-afc0-4729c5e567b0-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"5c1caae2-0233-4346-afc0-4729c5e567b0\") " pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.280218 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c1caae2-0233-4346-afc0-4729c5e567b0-config-data\") pod \"mysqld-exporter-0\" (UID: \"5c1caae2-0233-4346-afc0-4729c5e567b0\") " pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.281990 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-scripts" (OuterVolumeSpecName: "scripts") pod "cc6d66c8-28cb-499d-a09e-cefaf8f2020b" (UID: "cc6d66c8-28cb-499d-a09e-cefaf8f2020b"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.296777 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f28c\" (UniqueName: \"kubernetes.io/projected/5c1caae2-0233-4346-afc0-4729c5e567b0-kube-api-access-5f28c\") pod \"mysqld-exporter-0\" (UID: \"5c1caae2-0233-4346-afc0-4729c5e567b0\") " pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.327463 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cc6d66c8-28cb-499d-a09e-cefaf8f2020b" (UID: "cc6d66c8-28cb-499d-a09e-cefaf8f2020b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.371138 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.371192 4741 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.371205 4741 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.371217 4741 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 
08:43:30.371228 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpz2x\" (UniqueName: \"kubernetes.io/projected/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-kube-api-access-kpz2x\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.397174 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc6d66c8-28cb-499d-a09e-cefaf8f2020b" (UID: "cc6d66c8-28cb-499d-a09e-cefaf8f2020b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.421758 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.448153 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-config-data" (OuterVolumeSpecName: "config-data") pod "cc6d66c8-28cb-499d-a09e-cefaf8f2020b" (UID: "cc6d66c8-28cb-499d-a09e-cefaf8f2020b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.475160 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.475207 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d66c8-28cb-499d-a09e-cefaf8f2020b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.556183 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.667072 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5f2e94a2-7f27-4a01-a33f-11320fb2a81d","Type":"ContainerStarted","Data":"4a92d6b9b7b3932edfed1360763d8349049c65fb1b7b8b095994f57f7f8ade90"} Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.683678 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"593cc4eb-043e-484d-a1b5-9d83fbd630e7","Type":"ContainerStarted","Data":"26392209e63d2b40031162baae2501b93f73176c4b2f6a7c8917da1a47404504"} Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.700157 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6d66c8-28cb-499d-a09e-cefaf8f2020b","Type":"ContainerDied","Data":"d29ccaa91555ccadb81853488e7e49e5023eb22efad311703aa2569b6737cc52"} Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.700267 4741 scope.go:117] "RemoveContainer" containerID="f1249916a0ed5a8a5b511e1e6740aaa52cfef2ce08c8cf322eeea578de2d8308" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.700603 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.774544 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.774508916 podStartE2EDuration="4.774508916s" podCreationTimestamp="2026-02-26 08:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:43:30.704812414 +0000 UTC m=+1845.700749811" watchObservedRunningTime="2026-02-26 08:43:30.774508916 +0000 UTC m=+1845.770446293" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.850973 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.860609 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.860878 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.907206 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.927630 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:43:30 crc kubenswrapper[4741]: E0226 08:43:30.928528 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="ceilometer-central-agent" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.928558 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="ceilometer-central-agent" Feb 26 08:43:30 crc kubenswrapper[4741]: E0226 08:43:30.928578 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" 
containerName="ceilometer-notification-agent" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.928587 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="ceilometer-notification-agent" Feb 26 08:43:30 crc kubenswrapper[4741]: E0226 08:43:30.928615 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="sg-core" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.928622 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="sg-core" Feb 26 08:43:30 crc kubenswrapper[4741]: E0226 08:43:30.928645 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="proxy-httpd" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.928654 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="proxy-httpd" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.928909 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="proxy-httpd" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.928930 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="ceilometer-central-agent" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.928956 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="ceilometer-notification-agent" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.928973 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" containerName="sg-core" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.931634 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.936331 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.938037 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.940515 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 08:43:30 crc kubenswrapper[4741]: I0226 08:43:30.963392 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.007080 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.007698 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-log-httpd\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.008057 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dttnt\" (UniqueName: \"kubernetes.io/projected/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-kube-api-access-dttnt\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.008304 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.008427 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-config-data\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.008622 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-run-httpd\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.008945 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-scripts\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.009413 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.115534 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-run-httpd\") pod \"ceilometer-0\" (UID: 
\"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.115692 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-scripts\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.115711 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.115742 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.115786 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-log-httpd\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.115819 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dttnt\" (UniqueName: \"kubernetes.io/projected/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-kube-api-access-dttnt\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.115858 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.115898 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-config-data\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.118069 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-log-httpd\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.118445 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-run-httpd\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.128243 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.129599 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: 
I0226 08:43:31.133468 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.135154 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-config-data\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.137185 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-scripts\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.156426 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dttnt\" (UniqueName: \"kubernetes.io/projected/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-kube-api-access-dttnt\") pod \"ceilometer-0\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.270575 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.368954 4741 scope.go:117] "RemoveContainer" containerID="4b83ca6e2e571b2ce0ebe921939504468f2deeb85528497e7869bb22bbc29685" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.627350 4741 scope.go:117] "RemoveContainer" containerID="81843d81d7bd0783df7e81cfce00bb053eee8674563b77cdd8939046c29e42a8" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.757638 4741 generic.go:334] "Generic (PLEG): container finished" podID="0d80333c-99df-466b-bb44-6fd9177b60ab" containerID="75515b60f6f9e1f22d431794d8fffd31ff0c757a8642d0222b5de30b6a4e8262" exitCode=0 Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.758207 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d80333c-99df-466b-bb44-6fd9177b60ab","Type":"ContainerDied","Data":"75515b60f6f9e1f22d431794d8fffd31ff0c757a8642d0222b5de30b6a4e8262"} Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.879489 4741 scope.go:117] "RemoveContainer" containerID="5cb6a5220aa1238c041f516674549a8a68caa8a0bb8b6ca231cfc96f2d6573c5" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.888141 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7c78eb-9208-46a4-a062-0bb536df511d" path="/var/lib/kubelet/pods/2f7c78eb-9208-46a4-a062-0bb536df511d/volumes" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.888879 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90779734-9d35-47b9-ac0b-dbf02e3453a5" path="/var/lib/kubelet/pods/90779734-9d35-47b9-ac0b-dbf02e3453a5/volumes" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.889617 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc6d66c8-28cb-499d-a09e-cefaf8f2020b" path="/var/lib/kubelet/pods/cc6d66c8-28cb-499d-a09e-cefaf8f2020b/volumes" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.892808 4741 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 08:43:31 crc kubenswrapper[4741]: I0226 08:43:31.892874 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.191594 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.289787 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d80333c-99df-466b-bb44-6fd9177b60ab-config-data\") pod \"0d80333c-99df-466b-bb44-6fd9177b60ab\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.290024 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8knch\" (UniqueName: \"kubernetes.io/projected/0d80333c-99df-466b-bb44-6fd9177b60ab-kube-api-access-8knch\") pod \"0d80333c-99df-466b-bb44-6fd9177b60ab\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.290190 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d80333c-99df-466b-bb44-6fd9177b60ab-logs\") pod \"0d80333c-99df-466b-bb44-6fd9177b60ab\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.290240 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d80333c-99df-466b-bb44-6fd9177b60ab-combined-ca-bundle\") pod \"0d80333c-99df-466b-bb44-6fd9177b60ab\" (UID: \"0d80333c-99df-466b-bb44-6fd9177b60ab\") " Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.290966 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0d80333c-99df-466b-bb44-6fd9177b60ab-logs" (OuterVolumeSpecName: "logs") pod "0d80333c-99df-466b-bb44-6fd9177b60ab" (UID: "0d80333c-99df-466b-bb44-6fd9177b60ab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.301531 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d80333c-99df-466b-bb44-6fd9177b60ab-kube-api-access-8knch" (OuterVolumeSpecName: "kube-api-access-8knch") pod "0d80333c-99df-466b-bb44-6fd9177b60ab" (UID: "0d80333c-99df-466b-bb44-6fd9177b60ab"). InnerVolumeSpecName "kube-api-access-8knch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.322692 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.332738 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d80333c-99df-466b-bb44-6fd9177b60ab-config-data" (OuterVolumeSpecName: "config-data") pod "0d80333c-99df-466b-bb44-6fd9177b60ab" (UID: "0d80333c-99df-466b-bb44-6fd9177b60ab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:32 crc kubenswrapper[4741]: W0226 08:43:32.348015 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c2cbd6a_faeb_4fca_97e8_8d474ffbbe67.slice/crio-1549c77043a4f4efb7057393b8fd51c6cc102b27b7178ca75ecc0e050122790e WatchSource:0}: Error finding container 1549c77043a4f4efb7057393b8fd51c6cc102b27b7178ca75ecc0e050122790e: Status 404 returned error can't find the container with id 1549c77043a4f4efb7057393b8fd51c6cc102b27b7178ca75ecc0e050122790e Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.356059 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d80333c-99df-466b-bb44-6fd9177b60ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d80333c-99df-466b-bb44-6fd9177b60ab" (UID: "0d80333c-99df-466b-bb44-6fd9177b60ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.395231 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d80333c-99df-466b-bb44-6fd9177b60ab-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.395712 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8knch\" (UniqueName: \"kubernetes.io/projected/0d80333c-99df-466b-bb44-6fd9177b60ab-kube-api-access-8knch\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.395729 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d80333c-99df-466b-bb44-6fd9177b60ab-logs\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.395744 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0d80333c-99df-466b-bb44-6fd9177b60ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.479997 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 08:43:32 crc kubenswrapper[4741]: W0226 08:43:32.578781 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb586ac8_cf89_4499_81bf_8798eb5dbdb9.slice/crio-9c3c6b761c4a12da40a6b0204886700fb4a01fa94aa82262bb1aee7ace2ff8a5 WatchSource:0}: Error finding container 9c3c6b761c4a12da40a6b0204886700fb4a01fa94aa82262bb1aee7ace2ff8a5: Status 404 returned error can't find the container with id 9c3c6b761c4a12da40a6b0204886700fb4a01fa94aa82262bb1aee7ace2ff8a5 Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.580964 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.778226 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb586ac8-cf89-4499-81bf-8798eb5dbdb9","Type":"ContainerStarted","Data":"9c3c6b761c4a12da40a6b0204886700fb4a01fa94aa82262bb1aee7ace2ff8a5"} Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.780182 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67","Type":"ContainerStarted","Data":"1549c77043a4f4efb7057393b8fd51c6cc102b27b7178ca75ecc0e050122790e"} Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.782551 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"5c1caae2-0233-4346-afc0-4729c5e567b0","Type":"ContainerStarted","Data":"9b116c718d4ed108d8e73da15999b35ca7aee0e045bdf09b249835286caf028f"} Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.785043 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"0d80333c-99df-466b-bb44-6fd9177b60ab","Type":"ContainerDied","Data":"6847e27710d40a2659a175c018039b2182648d33ecec42c16b06246b16a99753"} Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.785070 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.785519 4741 scope.go:117] "RemoveContainer" containerID="75515b60f6f9e1f22d431794d8fffd31ff0c757a8642d0222b5de30b6a4e8262" Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.787900 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5f2e94a2-7f27-4a01-a33f-11320fb2a81d","Type":"ContainerStarted","Data":"e950376008eaadc2bd024799bb52cc5327deb1336b09a5811af5bc3cad2ce9e4"} Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.800043 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-api" containerID="cri-o://3a212edd7d1a8bf35358313d461d312ac11dc46e618a3388012545557ea9bc3d" gracePeriod=30 Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.800369 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-listener" containerID="cri-o://b09e9d16a754b4ae8e75853f2b6418814758d162a259ecfcb734ecfff2d66ed9" gracePeriod=30 Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.800447 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-notifier" containerID="cri-o://4dd9b38796bb9d950cd1d723463f9847b72b77aa6e43972a231c73f1138d3181" gracePeriod=30 Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.800490 4741 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/aodh-0" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-evaluator" containerID="cri-o://78b10d3f71f41913730999ef9d6b01ac9f4fbb51215aeef619971b35a3108158" gracePeriod=30 Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.800542 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f44805d0-fcfe-4241-b658-dd8d905936fa","Type":"ContainerStarted","Data":"b09e9d16a754b4ae8e75853f2b6418814758d162a259ecfcb734ecfff2d66ed9"} Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.834892 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.8348500869999995 podStartE2EDuration="4.834850087s" podCreationTimestamp="2026-02-26 08:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:43:32.833095297 +0000 UTC m=+1847.829032694" watchObservedRunningTime="2026-02-26 08:43:32.834850087 +0000 UTC m=+1847.830787474" Feb 26 08:43:32 crc kubenswrapper[4741]: I0226 08:43:32.885979 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.2983179590000002 podStartE2EDuration="15.885955469s" podCreationTimestamp="2026-02-26 08:43:17 +0000 UTC" firstStartedPulling="2026-02-26 08:43:18.893568906 +0000 UTC m=+1833.889506293" lastFinishedPulling="2026-02-26 08:43:31.481206416 +0000 UTC m=+1846.477143803" observedRunningTime="2026-02-26 08:43:32.872336992 +0000 UTC m=+1847.868274379" watchObservedRunningTime="2026-02-26 08:43:32.885955469 +0000 UTC m=+1847.881892856" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.036976 4741 scope.go:117] "RemoveContainer" containerID="b58a93bbbbf7733b6039451b71783dc57a30f2730b6b341ceab0e8d245577726" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.089714 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] 
Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.231873 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.279219 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 08:43:33 crc kubenswrapper[4741]: E0226 08:43:33.280368 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d80333c-99df-466b-bb44-6fd9177b60ab" containerName="nova-api-log" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.280394 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d80333c-99df-466b-bb44-6fd9177b60ab" containerName="nova-api-log" Feb 26 08:43:33 crc kubenswrapper[4741]: E0226 08:43:33.280426 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d80333c-99df-466b-bb44-6fd9177b60ab" containerName="nova-api-api" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.280434 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d80333c-99df-466b-bb44-6fd9177b60ab" containerName="nova-api-api" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.280727 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d80333c-99df-466b-bb44-6fd9177b60ab" containerName="nova-api-api" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.280772 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d80333c-99df-466b-bb44-6fd9177b60ab" containerName="nova-api-log" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.282355 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.288503 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.295738 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.425668 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6580ec03-64a0-49af-b44b-3f29d41b779e-config-data\") pod \"nova-api-0\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.426527 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6580ec03-64a0-49af-b44b-3f29d41b779e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.426711 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h84cw\" (UniqueName: \"kubernetes.io/projected/6580ec03-64a0-49af-b44b-3f29d41b779e-kube-api-access-h84cw\") pod \"nova-api-0\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.426864 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6580ec03-64a0-49af-b44b-3f29d41b779e-logs\") pod \"nova-api-0\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.529450 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-h84cw\" (UniqueName: \"kubernetes.io/projected/6580ec03-64a0-49af-b44b-3f29d41b779e-kube-api-access-h84cw\") pod \"nova-api-0\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.529557 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6580ec03-64a0-49af-b44b-3f29d41b779e-logs\") pod \"nova-api-0\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.529764 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6580ec03-64a0-49af-b44b-3f29d41b779e-config-data\") pod \"nova-api-0\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.529896 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6580ec03-64a0-49af-b44b-3f29d41b779e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.530422 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6580ec03-64a0-49af-b44b-3f29d41b779e-logs\") pod \"nova-api-0\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.539392 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6580ec03-64a0-49af-b44b-3f29d41b779e-config-data\") pod \"nova-api-0\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.541788 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6580ec03-64a0-49af-b44b-3f29d41b779e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.553608 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h84cw\" (UniqueName: \"kubernetes.io/projected/6580ec03-64a0-49af-b44b-3f29d41b779e-kube-api-access-h84cw\") pod \"nova-api-0\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.704939 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.788173 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:43:33 crc kubenswrapper[4741]: E0226 08:43:33.789132 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.812517 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d80333c-99df-466b-bb44-6fd9177b60ab" path="/var/lib/kubelet/pods/0d80333c-99df-466b-bb44-6fd9177b60ab/volumes" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.829178 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bb586ac8-cf89-4499-81bf-8798eb5dbdb9","Type":"ContainerStarted","Data":"0df4530d23839a5fc0b811aae6d533be5478bfd93da57fa77ffcfecbac934286"} Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.831479 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67","Type":"ContainerStarted","Data":"61dc2c1c1829f66db8a1331740cbdef1e2ae7a804a94c03138b6e91baed5cbda"} Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.833797 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.836259 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"5c1caae2-0233-4346-afc0-4729c5e567b0","Type":"ContainerStarted","Data":"e2c23615ae849569b525412c3421ed910bfa2234a7694a3da810d521482fe723"} Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.864217 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.469689954 podStartE2EDuration="4.864179858s" podCreationTimestamp="2026-02-26 08:43:29 +0000 UTC" firstStartedPulling="2026-02-26 08:43:32.359224896 +0000 UTC m=+1847.355162283" lastFinishedPulling="2026-02-26 08:43:32.7537148 +0000 UTC m=+1847.749652187" observedRunningTime="2026-02-26 08:43:33.862869101 +0000 UTC m=+1848.858806488" watchObservedRunningTime="2026-02-26 08:43:33.864179858 +0000 UTC m=+1848.860117255" Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.920323 4741 generic.go:334] "Generic (PLEG): container finished" podID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerID="4dd9b38796bb9d950cd1d723463f9847b72b77aa6e43972a231c73f1138d3181" exitCode=0 Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.920397 4741 generic.go:334] "Generic (PLEG): container finished" podID="f44805d0-fcfe-4241-b658-dd8d905936fa" 
containerID="78b10d3f71f41913730999ef9d6b01ac9f4fbb51215aeef619971b35a3108158" exitCode=0 Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.920415 4741 generic.go:334] "Generic (PLEG): container finished" podID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerID="3a212edd7d1a8bf35358313d461d312ac11dc46e618a3388012545557ea9bc3d" exitCode=0 Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.920976 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f44805d0-fcfe-4241-b658-dd8d905936fa","Type":"ContainerDied","Data":"4dd9b38796bb9d950cd1d723463f9847b72b77aa6e43972a231c73f1138d3181"} Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.921057 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f44805d0-fcfe-4241-b658-dd8d905936fa","Type":"ContainerDied","Data":"78b10d3f71f41913730999ef9d6b01ac9f4fbb51215aeef619971b35a3108158"} Feb 26 08:43:33 crc kubenswrapper[4741]: I0226 08:43:33.921071 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f44805d0-fcfe-4241-b658-dd8d905936fa","Type":"ContainerDied","Data":"3a212edd7d1a8bf35358313d461d312ac11dc46e618a3388012545557ea9bc3d"} Feb 26 08:43:34 crc kubenswrapper[4741]: I0226 08:43:34.341592 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 08:43:34 crc kubenswrapper[4741]: I0226 08:43:34.413561 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=4.745860466 podStartE2EDuration="5.413536076s" podCreationTimestamp="2026-02-26 08:43:29 +0000 UTC" firstStartedPulling="2026-02-26 08:43:32.481091621 +0000 UTC m=+1847.477029008" lastFinishedPulling="2026-02-26 08:43:33.148767231 +0000 UTC m=+1848.144704618" observedRunningTime="2026-02-26 08:43:33.917186095 +0000 UTC m=+1848.913123492" watchObservedRunningTime="2026-02-26 08:43:34.413536076 +0000 UTC m=+1849.409473463" Feb 
26 08:43:34 crc kubenswrapper[4741]: I0226 08:43:34.420916 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:43:34 crc kubenswrapper[4741]: I0226 08:43:34.421142 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 26 08:43:34 crc kubenswrapper[4741]: I0226 08:43:34.939701 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6580ec03-64a0-49af-b44b-3f29d41b779e","Type":"ContainerStarted","Data":"1535b1c95ebb92fc44cbd410cdcb1b0c9a1cfd4ce837834ab94f0a824df051b8"} Feb 26 08:43:34 crc kubenswrapper[4741]: I0226 08:43:34.941193 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6580ec03-64a0-49af-b44b-3f29d41b779e","Type":"ContainerStarted","Data":"795ddee3e848625bb60f7b4fd13e56c47feacab747f7dd82f9d677bcff49794b"} Feb 26 08:43:34 crc kubenswrapper[4741]: I0226 08:43:34.961786 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb586ac8-cf89-4499-81bf-8798eb5dbdb9","Type":"ContainerStarted","Data":"a1ecd9594d63c8baf4bb2ab4df3025ea093d27981f0be7e4aeda66fbfa6923cc"} Feb 26 08:43:35 crc kubenswrapper[4741]: I0226 08:43:35.984796 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6580ec03-64a0-49af-b44b-3f29d41b779e","Type":"ContainerStarted","Data":"f6247c8fe21597c6a5eab33584f7f3fe2d593d03a7a304851acb72eb9ed4ebd4"} Feb 26 08:43:35 crc kubenswrapper[4741]: I0226 08:43:35.991502 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb586ac8-cf89-4499-81bf-8798eb5dbdb9","Type":"ContainerStarted","Data":"43f68a3507f7762e3c76f5606a43d8471942c401269b4946fd1c93e130aa6e1b"} Feb 26 08:43:36 crc kubenswrapper[4741]: I0226 08:43:36.011527 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=3.011502023 podStartE2EDuration="3.011502023s" podCreationTimestamp="2026-02-26 08:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:43:36.009492176 +0000 UTC m=+1851.005429583" watchObservedRunningTime="2026-02-26 08:43:36.011502023 +0000 UTC m=+1851.007439410" Feb 26 08:43:36 crc kubenswrapper[4741]: I0226 08:43:36.882849 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 08:43:36 crc kubenswrapper[4741]: I0226 08:43:36.882924 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 08:43:37 crc kubenswrapper[4741]: I0226 08:43:37.902361 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.5:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 08:43:37 crc kubenswrapper[4741]: I0226 08:43:37.902410 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.5:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 08:43:38 crc kubenswrapper[4741]: I0226 08:43:38.028967 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb586ac8-cf89-4499-81bf-8798eb5dbdb9","Type":"ContainerStarted","Data":"8f2a5a877667631f27652398b359c57386cfc3009e460a62efb8874a50da4aac"} Feb 26 08:43:38 crc kubenswrapper[4741]: I0226 08:43:38.029185 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 08:43:38 crc kubenswrapper[4741]: I0226 
08:43:38.073650 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.71081872 podStartE2EDuration="8.073623254s" podCreationTimestamp="2026-02-26 08:43:30 +0000 UTC" firstStartedPulling="2026-02-26 08:43:32.581845785 +0000 UTC m=+1847.577783162" lastFinishedPulling="2026-02-26 08:43:36.944650299 +0000 UTC m=+1851.940587696" observedRunningTime="2026-02-26 08:43:38.05978155 +0000 UTC m=+1853.055718937" watchObservedRunningTime="2026-02-26 08:43:38.073623254 +0000 UTC m=+1853.069560641" Feb 26 08:43:39 crc kubenswrapper[4741]: I0226 08:43:39.341511 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 08:43:39 crc kubenswrapper[4741]: I0226 08:43:39.390003 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 08:43:40 crc kubenswrapper[4741]: I0226 08:43:40.095920 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 08:43:40 crc kubenswrapper[4741]: I0226 08:43:40.439684 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.078590 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.097572 4741 generic.go:334] "Generic (PLEG): container finished" podID="f7c12564-44ab-408c-b2ed-2466bff3274d" containerID="29dd4e78c1c29ffab0dc8c1e091f821fc77fbf7b04e670f69f3867d1bf4543ff" exitCode=137 Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.097678 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f7c12564-44ab-408c-b2ed-2466bff3274d","Type":"ContainerDied","Data":"29dd4e78c1c29ffab0dc8c1e091f821fc77fbf7b04e670f69f3867d1bf4543ff"} Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.097802 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f7c12564-44ab-408c-b2ed-2466bff3274d","Type":"ContainerDied","Data":"6521e2072922f79b1f3728183b54edd24fa8b3295eef416b62f33176363fd9fb"} Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.097836 4741 scope.go:117] "RemoveContainer" containerID="29dd4e78c1c29ffab0dc8c1e091f821fc77fbf7b04e670f69f3867d1bf4543ff" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.098157 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.137740 4741 scope.go:117] "RemoveContainer" containerID="29dd4e78c1c29ffab0dc8c1e091f821fc77fbf7b04e670f69f3867d1bf4543ff" Feb 26 08:43:43 crc kubenswrapper[4741]: E0226 08:43:43.138598 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29dd4e78c1c29ffab0dc8c1e091f821fc77fbf7b04e670f69f3867d1bf4543ff\": container with ID starting with 29dd4e78c1c29ffab0dc8c1e091f821fc77fbf7b04e670f69f3867d1bf4543ff not found: ID does not exist" containerID="29dd4e78c1c29ffab0dc8c1e091f821fc77fbf7b04e670f69f3867d1bf4543ff" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.138649 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29dd4e78c1c29ffab0dc8c1e091f821fc77fbf7b04e670f69f3867d1bf4543ff"} err="failed to get container status \"29dd4e78c1c29ffab0dc8c1e091f821fc77fbf7b04e670f69f3867d1bf4543ff\": rpc error: code = NotFound desc = could not find container \"29dd4e78c1c29ffab0dc8c1e091f821fc77fbf7b04e670f69f3867d1bf4543ff\": container with ID starting with 29dd4e78c1c29ffab0dc8c1e091f821fc77fbf7b04e670f69f3867d1bf4543ff not found: ID does not exist" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.166996 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hddd\" (UniqueName: \"kubernetes.io/projected/f7c12564-44ab-408c-b2ed-2466bff3274d-kube-api-access-2hddd\") pod \"f7c12564-44ab-408c-b2ed-2466bff3274d\" (UID: \"f7c12564-44ab-408c-b2ed-2466bff3274d\") " Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.167158 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7c12564-44ab-408c-b2ed-2466bff3274d-config-data\") pod \"f7c12564-44ab-408c-b2ed-2466bff3274d\" (UID: 
\"f7c12564-44ab-408c-b2ed-2466bff3274d\") " Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.167263 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c12564-44ab-408c-b2ed-2466bff3274d-combined-ca-bundle\") pod \"f7c12564-44ab-408c-b2ed-2466bff3274d\" (UID: \"f7c12564-44ab-408c-b2ed-2466bff3274d\") " Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.174413 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c12564-44ab-408c-b2ed-2466bff3274d-kube-api-access-2hddd" (OuterVolumeSpecName: "kube-api-access-2hddd") pod "f7c12564-44ab-408c-b2ed-2466bff3274d" (UID: "f7c12564-44ab-408c-b2ed-2466bff3274d"). InnerVolumeSpecName "kube-api-access-2hddd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.220337 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c12564-44ab-408c-b2ed-2466bff3274d-config-data" (OuterVolumeSpecName: "config-data") pod "f7c12564-44ab-408c-b2ed-2466bff3274d" (UID: "f7c12564-44ab-408c-b2ed-2466bff3274d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.237638 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c12564-44ab-408c-b2ed-2466bff3274d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7c12564-44ab-408c-b2ed-2466bff3274d" (UID: "f7c12564-44ab-408c-b2ed-2466bff3274d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.271328 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7c12564-44ab-408c-b2ed-2466bff3274d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.271386 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c12564-44ab-408c-b2ed-2466bff3274d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.271404 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hddd\" (UniqueName: \"kubernetes.io/projected/f7c12564-44ab-408c-b2ed-2466bff3274d-kube-api-access-2hddd\") on node \"crc\" DevicePath \"\"" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.449186 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.484214 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.498198 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 08:43:43 crc kubenswrapper[4741]: E0226 08:43:43.498965 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c12564-44ab-408c-b2ed-2466bff3274d" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.498989 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c12564-44ab-408c-b2ed-2466bff3274d" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.499330 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c12564-44ab-408c-b2ed-2466bff3274d" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 
08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.500453 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.504762 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.505279 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.505470 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.511926 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.583402 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2sf2\" (UniqueName: \"kubernetes.io/projected/078be97e-5b33-4a37-9c43-ffb13c9144e7-kube-api-access-r2sf2\") pod \"nova-cell1-novncproxy-0\" (UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.583918 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/078be97e-5b33-4a37-9c43-ffb13c9144e7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.584226 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078be97e-5b33-4a37-9c43-ffb13c9144e7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.584404 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078be97e-5b33-4a37-9c43-ffb13c9144e7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.584591 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/078be97e-5b33-4a37-9c43-ffb13c9144e7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.687856 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/078be97e-5b33-4a37-9c43-ffb13c9144e7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.688039 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2sf2\" (UniqueName: \"kubernetes.io/projected/078be97e-5b33-4a37-9c43-ffb13c9144e7-kube-api-access-r2sf2\") pod \"nova-cell1-novncproxy-0\" (UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.688157 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/078be97e-5b33-4a37-9c43-ffb13c9144e7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.688247 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078be97e-5b33-4a37-9c43-ffb13c9144e7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.688308 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078be97e-5b33-4a37-9c43-ffb13c9144e7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.692153 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078be97e-5b33-4a37-9c43-ffb13c9144e7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.692287 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078be97e-5b33-4a37-9c43-ffb13c9144e7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.698819 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/078be97e-5b33-4a37-9c43-ffb13c9144e7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc 
kubenswrapper[4741]: I0226 08:43:43.700698 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/078be97e-5b33-4a37-9c43-ffb13c9144e7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.706674 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.706731 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.709075 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2sf2\" (UniqueName: \"kubernetes.io/projected/078be97e-5b33-4a37-9c43-ffb13c9144e7-kube-api-access-r2sf2\") pod \"nova-cell1-novncproxy-0\" (UID: \"078be97e-5b33-4a37-9c43-ffb13c9144e7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.806038 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c12564-44ab-408c-b2ed-2466bff3274d" path="/var/lib/kubelet/pods/f7c12564-44ab-408c-b2ed-2466bff3274d/volumes" Feb 26 08:43:43 crc kubenswrapper[4741]: I0226 08:43:43.834005 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:44 crc kubenswrapper[4741]: I0226 08:43:44.403293 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 08:43:44 crc kubenswrapper[4741]: I0226 08:43:44.788373 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6580ec03-64a0-49af-b44b-3f29d41b779e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.10:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 08:43:44 crc kubenswrapper[4741]: I0226 08:43:44.788797 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6580ec03-64a0-49af-b44b-3f29d41b779e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.10:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 08:43:45 crc kubenswrapper[4741]: I0226 08:43:45.160656 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"078be97e-5b33-4a37-9c43-ffb13c9144e7","Type":"ContainerStarted","Data":"5ae2cb0ddad7f258dedd55758f5c3c1017a2e39f4464c76b61ac68a505320c6f"} Feb 26 08:43:45 crc kubenswrapper[4741]: I0226 08:43:45.160711 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"078be97e-5b33-4a37-9c43-ffb13c9144e7","Type":"ContainerStarted","Data":"ca493baea44599bc243ebfcff270b50979a8d89f095be29ccb68825ee79a2f86"} Feb 26 08:43:45 crc kubenswrapper[4741]: I0226 08:43:45.189958 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.189935985 podStartE2EDuration="2.189935985s" podCreationTimestamp="2026-02-26 08:43:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 
08:43:45.18307237 +0000 UTC m=+1860.179009757" watchObservedRunningTime="2026-02-26 08:43:45.189935985 +0000 UTC m=+1860.185873372" Feb 26 08:43:46 crc kubenswrapper[4741]: I0226 08:43:46.889836 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 08:43:46 crc kubenswrapper[4741]: I0226 08:43:46.893054 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 08:43:46 crc kubenswrapper[4741]: I0226 08:43:46.899724 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 08:43:47 crc kubenswrapper[4741]: I0226 08:43:47.197740 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 08:43:47 crc kubenswrapper[4741]: I0226 08:43:47.788463 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:43:47 crc kubenswrapper[4741]: E0226 08:43:47.788800 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:43:48 crc kubenswrapper[4741]: I0226 08:43:48.835024 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:53 crc kubenswrapper[4741]: I0226 08:43:53.711489 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 08:43:53 crc kubenswrapper[4741]: I0226 08:43:53.712665 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 08:43:53 crc 
kubenswrapper[4741]: I0226 08:43:53.712992 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 08:43:53 crc kubenswrapper[4741]: I0226 08:43:53.717951 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 08:43:53 crc kubenswrapper[4741]: I0226 08:43:53.834490 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:53 crc kubenswrapper[4741]: I0226 08:43:53.865102 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.285946 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.291791 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.307953 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.697177 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-z7vs2"] Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.700268 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.790499 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-z7vs2"] Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.794146 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.794244 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-config\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.794331 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.794352 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blz9d\" (UniqueName: \"kubernetes.io/projected/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-kube-api-access-blz9d\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.794444 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.794480 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.899270 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.899332 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blz9d\" (UniqueName: \"kubernetes.io/projected/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-kube-api-access-blz9d\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.899526 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.899579 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.899678 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.899789 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-config\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.904273 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.906239 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.908410 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-config\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.908733 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.910457 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.963922 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-gbd2f"] Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.989617 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blz9d\" (UniqueName: \"kubernetes.io/projected/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-kube-api-access-blz9d\") pod \"dnsmasq-dns-79b5d74c8c-z7vs2\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:54 crc kubenswrapper[4741]: I0226 08:43:54.990770 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.003516 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.003803 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.050778 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gbd2f"] Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.056310 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.128354 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gbd2f\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.128874 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpxj7\" (UniqueName: \"kubernetes.io/projected/60020180-cead-4bfa-bd7c-7637b12f274c-kube-api-access-lpxj7\") pod \"nova-cell1-cell-mapping-gbd2f\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.129059 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-scripts\") pod \"nova-cell1-cell-mapping-gbd2f\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 
26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.129239 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-config-data\") pod \"nova-cell1-cell-mapping-gbd2f\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.247174 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gbd2f\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.247273 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpxj7\" (UniqueName: \"kubernetes.io/projected/60020180-cead-4bfa-bd7c-7637b12f274c-kube-api-access-lpxj7\") pod \"nova-cell1-cell-mapping-gbd2f\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.247644 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-scripts\") pod \"nova-cell1-cell-mapping-gbd2f\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.247844 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-config-data\") pod \"nova-cell1-cell-mapping-gbd2f\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:43:55 crc kubenswrapper[4741]: 
I0226 08:43:55.259081 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-scripts\") pod \"nova-cell1-cell-mapping-gbd2f\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.261771 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-config-data\") pod \"nova-cell1-cell-mapping-gbd2f\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.264973 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gbd2f\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.328912 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpxj7\" (UniqueName: \"kubernetes.io/projected/60020180-cead-4bfa-bd7c-7637b12f274c-kube-api-access-lpxj7\") pod \"nova-cell1-cell-mapping-gbd2f\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.410936 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:43:55 crc kubenswrapper[4741]: W0226 08:43:55.927135 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd5ef4ab_5d3a_41b5_965e_7579768d32b8.slice/crio-4ca4573fb200790751d87aea0db009635e3c038a41dda53a62d128a52630c942 WatchSource:0}: Error finding container 4ca4573fb200790751d87aea0db009635e3c038a41dda53a62d128a52630c942: Status 404 returned error can't find the container with id 4ca4573fb200790751d87aea0db009635e3c038a41dda53a62d128a52630c942 Feb 26 08:43:55 crc kubenswrapper[4741]: I0226 08:43:55.934571 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-z7vs2"] Feb 26 08:43:56 crc kubenswrapper[4741]: I0226 08:43:56.159445 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gbd2f"] Feb 26 08:43:56 crc kubenswrapper[4741]: I0226 08:43:56.363593 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gbd2f" event={"ID":"60020180-cead-4bfa-bd7c-7637b12f274c","Type":"ContainerStarted","Data":"6012eafdd3c43c7b466899a8e4336b9c512d79040b050ac869797dc41cf2e8d4"} Feb 26 08:43:56 crc kubenswrapper[4741]: I0226 08:43:56.366214 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" event={"ID":"bd5ef4ab-5d3a-41b5-965e-7579768d32b8","Type":"ContainerStarted","Data":"4ca4573fb200790751d87aea0db009635e3c038a41dda53a62d128a52630c942"} Feb 26 08:43:57 crc kubenswrapper[4741]: I0226 08:43:57.381115 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gbd2f" event={"ID":"60020180-cead-4bfa-bd7c-7637b12f274c","Type":"ContainerStarted","Data":"68bd1024267507e2eaae7b2ae93d53e7c88a93cd775721defce9be561b65351c"} Feb 26 08:43:57 crc kubenswrapper[4741]: I0226 08:43:57.387178 4741 generic.go:334] "Generic (PLEG): container 
finished" podID="bd5ef4ab-5d3a-41b5-965e-7579768d32b8" containerID="56f0a7e83ec0e3dd5f751d65a461cd086d3fe389cddbec995fd4a2b39ab138f6" exitCode=0 Feb 26 08:43:57 crc kubenswrapper[4741]: I0226 08:43:57.387395 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" event={"ID":"bd5ef4ab-5d3a-41b5-965e-7579768d32b8","Type":"ContainerDied","Data":"56f0a7e83ec0e3dd5f751d65a461cd086d3fe389cddbec995fd4a2b39ab138f6"} Feb 26 08:43:57 crc kubenswrapper[4741]: I0226 08:43:57.432921 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-gbd2f" podStartSLOduration=3.432892986 podStartE2EDuration="3.432892986s" podCreationTimestamp="2026-02-26 08:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:43:57.414423531 +0000 UTC m=+1872.410360918" watchObservedRunningTime="2026-02-26 08:43:57.432892986 +0000 UTC m=+1872.428830373" Feb 26 08:43:58 crc kubenswrapper[4741]: I0226 08:43:58.115708 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:43:58 crc kubenswrapper[4741]: I0226 08:43:58.116291 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6580ec03-64a0-49af-b44b-3f29d41b779e" containerName="nova-api-log" containerID="cri-o://1535b1c95ebb92fc44cbd410cdcb1b0c9a1cfd4ce837834ab94f0a824df051b8" gracePeriod=30 Feb 26 08:43:58 crc kubenswrapper[4741]: I0226 08:43:58.116937 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6580ec03-64a0-49af-b44b-3f29d41b779e" containerName="nova-api-api" containerID="cri-o://f6247c8fe21597c6a5eab33584f7f3fe2d593d03a7a304851acb72eb9ed4ebd4" gracePeriod=30 Feb 26 08:43:58 crc kubenswrapper[4741]: I0226 08:43:58.420846 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" event={"ID":"bd5ef4ab-5d3a-41b5-965e-7579768d32b8","Type":"ContainerStarted","Data":"33af1e4eec86045af6d96cf3b23e28ff3d21b6e5ec6442c5028cd9c57dd3f076"} Feb 26 08:43:58 crc kubenswrapper[4741]: I0226 08:43:58.421348 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:43:58 crc kubenswrapper[4741]: I0226 08:43:58.480314 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" podStartSLOduration=4.480282111 podStartE2EDuration="4.480282111s" podCreationTimestamp="2026-02-26 08:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:43:58.464378739 +0000 UTC m=+1873.460316156" watchObservedRunningTime="2026-02-26 08:43:58.480282111 +0000 UTC m=+1873.476219498" Feb 26 08:43:58 crc kubenswrapper[4741]: I0226 08:43:58.788300 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:43:58 crc kubenswrapper[4741]: E0226 08:43:58.788545 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:43:59 crc kubenswrapper[4741]: I0226 08:43:59.031308 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:43:59 crc kubenswrapper[4741]: I0226 08:43:59.031679 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" 
containerName="ceilometer-central-agent" containerID="cri-o://0df4530d23839a5fc0b811aae6d533be5478bfd93da57fa77ffcfecbac934286" gracePeriod=30 Feb 26 08:43:59 crc kubenswrapper[4741]: I0226 08:43:59.032259 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="proxy-httpd" containerID="cri-o://8f2a5a877667631f27652398b359c57386cfc3009e460a62efb8874a50da4aac" gracePeriod=30 Feb 26 08:43:59 crc kubenswrapper[4741]: I0226 08:43:59.032314 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="sg-core" containerID="cri-o://43f68a3507f7762e3c76f5606a43d8471942c401269b4946fd1c93e130aa6e1b" gracePeriod=30 Feb 26 08:43:59 crc kubenswrapper[4741]: I0226 08:43:59.032252 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="ceilometer-notification-agent" containerID="cri-o://a1ecd9594d63c8baf4bb2ab4df3025ea093d27981f0be7e4aeda66fbfa6923cc" gracePeriod=30 Feb 26 08:43:59 crc kubenswrapper[4741]: I0226 08:43:59.043541 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.9:3000/\": read tcp 10.217.0.2:52772->10.217.1.9:3000: read: connection reset by peer" Feb 26 08:43:59 crc kubenswrapper[4741]: I0226 08:43:59.449974 4741 generic.go:334] "Generic (PLEG): container finished" podID="6580ec03-64a0-49af-b44b-3f29d41b779e" containerID="1535b1c95ebb92fc44cbd410cdcb1b0c9a1cfd4ce837834ab94f0a824df051b8" exitCode=143 Feb 26 08:43:59 crc kubenswrapper[4741]: I0226 08:43:59.450555 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"6580ec03-64a0-49af-b44b-3f29d41b779e","Type":"ContainerDied","Data":"1535b1c95ebb92fc44cbd410cdcb1b0c9a1cfd4ce837834ab94f0a824df051b8"} Feb 26 08:43:59 crc kubenswrapper[4741]: I0226 08:43:59.459220 4741 generic.go:334] "Generic (PLEG): container finished" podID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerID="8f2a5a877667631f27652398b359c57386cfc3009e460a62efb8874a50da4aac" exitCode=0 Feb 26 08:43:59 crc kubenswrapper[4741]: I0226 08:43:59.459266 4741 generic.go:334] "Generic (PLEG): container finished" podID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerID="43f68a3507f7762e3c76f5606a43d8471942c401269b4946fd1c93e130aa6e1b" exitCode=2 Feb 26 08:43:59 crc kubenswrapper[4741]: I0226 08:43:59.460729 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb586ac8-cf89-4499-81bf-8798eb5dbdb9","Type":"ContainerDied","Data":"8f2a5a877667631f27652398b359c57386cfc3009e460a62efb8874a50da4aac"} Feb 26 08:43:59 crc kubenswrapper[4741]: I0226 08:43:59.460774 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb586ac8-cf89-4499-81bf-8798eb5dbdb9","Type":"ContainerDied","Data":"43f68a3507f7762e3c76f5606a43d8471942c401269b4946fd1c93e130aa6e1b"} Feb 26 08:44:00 crc kubenswrapper[4741]: I0226 08:44:00.150930 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534924-qblgp"] Feb 26 08:44:00 crc kubenswrapper[4741]: I0226 08:44:00.159121 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534924-qblgp" Feb 26 08:44:00 crc kubenswrapper[4741]: I0226 08:44:00.163657 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:44:00 crc kubenswrapper[4741]: I0226 08:44:00.163929 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:44:00 crc kubenswrapper[4741]: I0226 08:44:00.164194 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:44:00 crc kubenswrapper[4741]: I0226 08:44:00.166273 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534924-qblgp"] Feb 26 08:44:00 crc kubenswrapper[4741]: I0226 08:44:00.274603 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb29n\" (UniqueName: \"kubernetes.io/projected/458edee3-5c0d-45a1-93e3-80a518d7a3e8-kube-api-access-jb29n\") pod \"auto-csr-approver-29534924-qblgp\" (UID: \"458edee3-5c0d-45a1-93e3-80a518d7a3e8\") " pod="openshift-infra/auto-csr-approver-29534924-qblgp" Feb 26 08:44:00 crc kubenswrapper[4741]: I0226 08:44:00.378419 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb29n\" (UniqueName: \"kubernetes.io/projected/458edee3-5c0d-45a1-93e3-80a518d7a3e8-kube-api-access-jb29n\") pod \"auto-csr-approver-29534924-qblgp\" (UID: \"458edee3-5c0d-45a1-93e3-80a518d7a3e8\") " pod="openshift-infra/auto-csr-approver-29534924-qblgp" Feb 26 08:44:00 crc kubenswrapper[4741]: I0226 08:44:00.414020 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb29n\" (UniqueName: \"kubernetes.io/projected/458edee3-5c0d-45a1-93e3-80a518d7a3e8-kube-api-access-jb29n\") pod \"auto-csr-approver-29534924-qblgp\" (UID: \"458edee3-5c0d-45a1-93e3-80a518d7a3e8\") " 
pod="openshift-infra/auto-csr-approver-29534924-qblgp" Feb 26 08:44:00 crc kubenswrapper[4741]: I0226 08:44:00.482538 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534924-qblgp" Feb 26 08:44:00 crc kubenswrapper[4741]: I0226 08:44:00.486480 4741 generic.go:334] "Generic (PLEG): container finished" podID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerID="0df4530d23839a5fc0b811aae6d533be5478bfd93da57fa77ffcfecbac934286" exitCode=0 Feb 26 08:44:00 crc kubenswrapper[4741]: I0226 08:44:00.486700 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb586ac8-cf89-4499-81bf-8798eb5dbdb9","Type":"ContainerDied","Data":"0df4530d23839a5fc0b811aae6d533be5478bfd93da57fa77ffcfecbac934286"} Feb 26 08:44:01 crc kubenswrapper[4741]: I0226 08:44:01.078848 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534924-qblgp"] Feb 26 08:44:01 crc kubenswrapper[4741]: W0226 08:44:01.108897 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod458edee3_5c0d_45a1_93e3_80a518d7a3e8.slice/crio-c72e0841c328c415d9dfd206a385958095395003839101fa0d08b6d0209d124a WatchSource:0}: Error finding container c72e0841c328c415d9dfd206a385958095395003839101fa0d08b6d0209d124a: Status 404 returned error can't find the container with id c72e0841c328c415d9dfd206a385958095395003839101fa0d08b6d0209d124a Feb 26 08:44:01 crc kubenswrapper[4741]: I0226 08:44:01.272571 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.9:3000/\": dial tcp 10.217.1.9:3000: connect: connection refused" Feb 26 08:44:01 crc kubenswrapper[4741]: I0226 08:44:01.506340 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29534924-qblgp" event={"ID":"458edee3-5c0d-45a1-93e3-80a518d7a3e8","Type":"ContainerStarted","Data":"c72e0841c328c415d9dfd206a385958095395003839101fa0d08b6d0209d124a"} Feb 26 08:44:02 crc kubenswrapper[4741]: I0226 08:44:02.526924 4741 generic.go:334] "Generic (PLEG): container finished" podID="6580ec03-64a0-49af-b44b-3f29d41b779e" containerID="f6247c8fe21597c6a5eab33584f7f3fe2d593d03a7a304851acb72eb9ed4ebd4" exitCode=0 Feb 26 08:44:02 crc kubenswrapper[4741]: I0226 08:44:02.527296 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6580ec03-64a0-49af-b44b-3f29d41b779e","Type":"ContainerDied","Data":"f6247c8fe21597c6a5eab33584f7f3fe2d593d03a7a304851acb72eb9ed4ebd4"} Feb 26 08:44:02 crc kubenswrapper[4741]: I0226 08:44:02.791251 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:44:02 crc kubenswrapper[4741]: I0226 08:44:02.888725 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6580ec03-64a0-49af-b44b-3f29d41b779e-logs\") pod \"6580ec03-64a0-49af-b44b-3f29d41b779e\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " Feb 26 08:44:02 crc kubenswrapper[4741]: I0226 08:44:02.888887 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6580ec03-64a0-49af-b44b-3f29d41b779e-config-data\") pod \"6580ec03-64a0-49af-b44b-3f29d41b779e\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " Feb 26 08:44:02 crc kubenswrapper[4741]: I0226 08:44:02.888986 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h84cw\" (UniqueName: \"kubernetes.io/projected/6580ec03-64a0-49af-b44b-3f29d41b779e-kube-api-access-h84cw\") pod \"6580ec03-64a0-49af-b44b-3f29d41b779e\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " Feb 26 08:44:02 crc 
kubenswrapper[4741]: I0226 08:44:02.889055 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6580ec03-64a0-49af-b44b-3f29d41b779e-combined-ca-bundle\") pod \"6580ec03-64a0-49af-b44b-3f29d41b779e\" (UID: \"6580ec03-64a0-49af-b44b-3f29d41b779e\") " Feb 26 08:44:02 crc kubenswrapper[4741]: I0226 08:44:02.891514 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6580ec03-64a0-49af-b44b-3f29d41b779e-logs" (OuterVolumeSpecName: "logs") pod "6580ec03-64a0-49af-b44b-3f29d41b779e" (UID: "6580ec03-64a0-49af-b44b-3f29d41b779e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:44:02 crc kubenswrapper[4741]: I0226 08:44:02.963794 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6580ec03-64a0-49af-b44b-3f29d41b779e-kube-api-access-h84cw" (OuterVolumeSpecName: "kube-api-access-h84cw") pod "6580ec03-64a0-49af-b44b-3f29d41b779e" (UID: "6580ec03-64a0-49af-b44b-3f29d41b779e"). InnerVolumeSpecName "kube-api-access-h84cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.002430 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6580ec03-64a0-49af-b44b-3f29d41b779e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6580ec03-64a0-49af-b44b-3f29d41b779e" (UID: "6580ec03-64a0-49af-b44b-3f29d41b779e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.019062 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6580ec03-64a0-49af-b44b-3f29d41b779e-logs\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.019412 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h84cw\" (UniqueName: \"kubernetes.io/projected/6580ec03-64a0-49af-b44b-3f29d41b779e-kube-api-access-h84cw\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.019429 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6580ec03-64a0-49af-b44b-3f29d41b779e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.019186 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6580ec03-64a0-49af-b44b-3f29d41b779e-config-data" (OuterVolumeSpecName: "config-data") pod "6580ec03-64a0-49af-b44b-3f29d41b779e" (UID: "6580ec03-64a0-49af-b44b-3f29d41b779e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.140382 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6580ec03-64a0-49af-b44b-3f29d41b779e-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.543911 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.545288 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6580ec03-64a0-49af-b44b-3f29d41b779e","Type":"ContainerDied","Data":"795ddee3e848625bb60f7b4fd13e56c47feacab747f7dd82f9d677bcff49794b"} Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.545488 4741 scope.go:117] "RemoveContainer" containerID="f6247c8fe21597c6a5eab33584f7f3fe2d593d03a7a304851acb72eb9ed4ebd4" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.551842 4741 generic.go:334] "Generic (PLEG): container finished" podID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerID="b09e9d16a754b4ae8e75853f2b6418814758d162a259ecfcb734ecfff2d66ed9" exitCode=137 Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.551924 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f44805d0-fcfe-4241-b658-dd8d905936fa","Type":"ContainerDied","Data":"b09e9d16a754b4ae8e75853f2b6418814758d162a259ecfcb734ecfff2d66ed9"} Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.563622 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534924-qblgp" event={"ID":"458edee3-5c0d-45a1-93e3-80a518d7a3e8","Type":"ContainerStarted","Data":"b0ba8ab0528f3974a0418972692b89cb4c16621b64431f99c865fe7c4a74fc3f"} Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.594866 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534924-qblgp" podStartSLOduration=2.163257751 podStartE2EDuration="3.594839507s" podCreationTimestamp="2026-02-26 08:44:00 +0000 UTC" firstStartedPulling="2026-02-26 08:44:01.11251812 +0000 UTC m=+1876.108455507" lastFinishedPulling="2026-02-26 08:44:02.544099876 +0000 UTC m=+1877.540037263" observedRunningTime="2026-02-26 08:44:03.585652946 +0000 UTC m=+1878.581590553" watchObservedRunningTime="2026-02-26 
08:44:03.594839507 +0000 UTC m=+1878.590776914" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.614269 4741 scope.go:117] "RemoveContainer" containerID="1535b1c95ebb92fc44cbd410cdcb1b0c9a1cfd4ce837834ab94f0a824df051b8" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.646999 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.680565 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.701203 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 08:44:03 crc kubenswrapper[4741]: E0226 08:44:03.702054 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6580ec03-64a0-49af-b44b-3f29d41b779e" containerName="nova-api-api" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.702071 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="6580ec03-64a0-49af-b44b-3f29d41b779e" containerName="nova-api-api" Feb 26 08:44:03 crc kubenswrapper[4741]: E0226 08:44:03.702081 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6580ec03-64a0-49af-b44b-3f29d41b779e" containerName="nova-api-log" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.702089 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="6580ec03-64a0-49af-b44b-3f29d41b779e" containerName="nova-api-log" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.702401 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="6580ec03-64a0-49af-b44b-3f29d41b779e" containerName="nova-api-api" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.702425 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="6580ec03-64a0-49af-b44b-3f29d41b779e" containerName="nova-api-log" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.704667 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.710605 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.710850 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.711067 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.717451 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.762620 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmfqq\" (UniqueName: \"kubernetes.io/projected/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-kube-api-access-mmfqq\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.762729 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-logs\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.765217 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.765282 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.765337 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-public-tls-certs\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.765374 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-config-data\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.818223 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6580ec03-64a0-49af-b44b-3f29d41b779e" path="/var/lib/kubelet/pods/6580ec03-64a0-49af-b44b-3f29d41b779e/volumes" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.868012 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-config-data\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.868087 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmfqq\" (UniqueName: \"kubernetes.io/projected/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-kube-api-access-mmfqq\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.868257 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-logs\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.868779 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.868831 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.868867 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-public-tls-certs\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.870312 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-logs\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.878986 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 
crc kubenswrapper[4741]: I0226 08:44:03.880484 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.883692 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-public-tls-certs\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.891766 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmfqq\" (UniqueName: \"kubernetes.io/projected/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-kube-api-access-mmfqq\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:03 crc kubenswrapper[4741]: I0226 08:44:03.892149 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-config-data\") pod \"nova-api-0\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " pod="openstack/nova-api-0" Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.066138 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.293790 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.490746 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr8mg\" (UniqueName: \"kubernetes.io/projected/f44805d0-fcfe-4241-b658-dd8d905936fa-kube-api-access-kr8mg\") pod \"f44805d0-fcfe-4241-b658-dd8d905936fa\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.490881 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-config-data\") pod \"f44805d0-fcfe-4241-b658-dd8d905936fa\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.491815 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-scripts\") pod \"f44805d0-fcfe-4241-b658-dd8d905936fa\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.491875 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-combined-ca-bundle\") pod \"f44805d0-fcfe-4241-b658-dd8d905936fa\" (UID: \"f44805d0-fcfe-4241-b658-dd8d905936fa\") " Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.498485 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-scripts" (OuterVolumeSpecName: "scripts") pod "f44805d0-fcfe-4241-b658-dd8d905936fa" (UID: "f44805d0-fcfe-4241-b658-dd8d905936fa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.501315 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44805d0-fcfe-4241-b658-dd8d905936fa-kube-api-access-kr8mg" (OuterVolumeSpecName: "kube-api-access-kr8mg") pod "f44805d0-fcfe-4241-b658-dd8d905936fa" (UID: "f44805d0-fcfe-4241-b658-dd8d905936fa"). InnerVolumeSpecName "kube-api-access-kr8mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.598099 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.619991 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr8mg\" (UniqueName: \"kubernetes.io/projected/f44805d0-fcfe-4241-b658-dd8d905936fa-kube-api-access-kr8mg\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.649302 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.649779 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f44805d0-fcfe-4241-b658-dd8d905936fa","Type":"ContainerDied","Data":"6f9d19dfe21498b108f314cdefc2ffe5f6caed43014f5c43760e9c1a942ca375"} Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.649874 4741 scope.go:117] "RemoveContainer" containerID="b09e9d16a754b4ae8e75853f2b6418814758d162a259ecfcb734ecfff2d66ed9" Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.657848 4741 generic.go:334] "Generic (PLEG): container finished" podID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerID="a1ecd9594d63c8baf4bb2ab4df3025ea093d27981f0be7e4aeda66fbfa6923cc" exitCode=0 Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.659872 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb586ac8-cf89-4499-81bf-8798eb5dbdb9","Type":"ContainerDied","Data":"a1ecd9594d63c8baf4bb2ab4df3025ea093d27981f0be7e4aeda66fbfa6923cc"} Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.670102 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-config-data" (OuterVolumeSpecName: "config-data") pod "f44805d0-fcfe-4241-b658-dd8d905936fa" (UID: "f44805d0-fcfe-4241-b658-dd8d905936fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.731966 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.782459 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.809199 4741 scope.go:117] "RemoveContainer" containerID="4dd9b38796bb9d950cd1d723463f9847b72b77aa6e43972a231c73f1138d3181" Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.939330 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f44805d0-fcfe-4241-b658-dd8d905936fa" (UID: "f44805d0-fcfe-4241-b658-dd8d905936fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:04 crc kubenswrapper[4741]: I0226 08:44:04.959989 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44805d0-fcfe-4241-b658-dd8d905936fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.068754 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.139139 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.148783 4741 scope.go:117] "RemoveContainer" containerID="78b10d3f71f41913730999ef9d6b01ac9f4fbb51215aeef619971b35a3108158" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.224316 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.272823 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dttnt\" (UniqueName: \"kubernetes.io/projected/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-kube-api-access-dttnt\") pod \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.273087 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-run-httpd\") pod \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.273326 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-log-httpd\") pod \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.273393 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-ceilometer-tls-certs\") pod \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.273438 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-sg-core-conf-yaml\") pod \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.273481 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-config-data\") pod \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.273537 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-combined-ca-bundle\") pod \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.273607 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-scripts\") pod \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\" (UID: \"bb586ac8-cf89-4499-81bf-8798eb5dbdb9\") " Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.277381 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bb586ac8-cf89-4499-81bf-8798eb5dbdb9" (UID: "bb586ac8-cf89-4499-81bf-8798eb5dbdb9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.277637 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bb586ac8-cf89-4499-81bf-8798eb5dbdb9" (UID: "bb586ac8-cf89-4499-81bf-8798eb5dbdb9"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.288213 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.290523 4741 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.290572 4741 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.295851 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-kube-api-access-dttnt" (OuterVolumeSpecName: "kube-api-access-dttnt") pod "bb586ac8-cf89-4499-81bf-8798eb5dbdb9" (UID: "bb586ac8-cf89-4499-81bf-8798eb5dbdb9"). InnerVolumeSpecName "kube-api-access-dttnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.313174 4741 scope.go:117] "RemoveContainer" containerID="3a212edd7d1a8bf35358313d461d312ac11dc46e618a3388012545557ea9bc3d" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.317471 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-scripts" (OuterVolumeSpecName: "scripts") pod "bb586ac8-cf89-4499-81bf-8798eb5dbdb9" (UID: "bb586ac8-cf89-4499-81bf-8798eb5dbdb9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.353962 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-s26dw"] Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.355009 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" podUID="3b59536e-ad66-4a9d-89a6-2a6479e8be01" containerName="dnsmasq-dns" containerID="cri-o://9ea4157099a711c21f9106b8271267a0332e3615f60d9fdee52b05841234c597" gracePeriod=10 Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.399818 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.399871 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dttnt\" (UniqueName: \"kubernetes.io/projected/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-kube-api-access-dttnt\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.400774 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 26 08:44:05 crc kubenswrapper[4741]: E0226 08:44:05.434037 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-listener" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434063 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-listener" Feb 26 08:44:05 crc kubenswrapper[4741]: E0226 08:44:05.434085 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-api" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434091 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" 
containerName="aodh-api" Feb 26 08:44:05 crc kubenswrapper[4741]: E0226 08:44:05.434127 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="ceilometer-notification-agent" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434134 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="ceilometer-notification-agent" Feb 26 08:44:05 crc kubenswrapper[4741]: E0226 08:44:05.434160 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="ceilometer-central-agent" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434166 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="ceilometer-central-agent" Feb 26 08:44:05 crc kubenswrapper[4741]: E0226 08:44:05.434177 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-evaluator" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434182 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-evaluator" Feb 26 08:44:05 crc kubenswrapper[4741]: E0226 08:44:05.434193 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="sg-core" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434202 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="sg-core" Feb 26 08:44:05 crc kubenswrapper[4741]: E0226 08:44:05.434214 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="proxy-httpd" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434222 4741 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="proxy-httpd" Feb 26 08:44:05 crc kubenswrapper[4741]: E0226 08:44:05.434241 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-notifier" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434247 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-notifier" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434478 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-listener" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434487 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-api" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434502 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-notifier" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434521 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="ceilometer-notification-agent" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434534 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="sg-core" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434552 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" containerName="aodh-evaluator" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434558 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="proxy-httpd" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.434566 4741 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" containerName="ceilometer-central-agent" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.438208 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.442561 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.443098 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.443287 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tlszt" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.443665 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.443909 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.472780 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.504011 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-scripts\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.504243 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-config-data\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.504274 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzkxn\" (UniqueName: \"kubernetes.io/projected/bc9d7088-dbf9-41ac-8e6d-2531330f8934-kube-api-access-nzkxn\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.504330 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-internal-tls-certs\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.504421 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-combined-ca-bundle\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.504521 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-public-tls-certs\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.528304 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bb586ac8-cf89-4499-81bf-8798eb5dbdb9" (UID: "bb586ac8-cf89-4499-81bf-8798eb5dbdb9"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.541554 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bb586ac8-cf89-4499-81bf-8798eb5dbdb9" (UID: "bb586ac8-cf89-4499-81bf-8798eb5dbdb9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.606910 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-config-data\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.606972 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzkxn\" (UniqueName: \"kubernetes.io/projected/bc9d7088-dbf9-41ac-8e6d-2531330f8934-kube-api-access-nzkxn\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.607030 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-internal-tls-certs\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.607085 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-combined-ca-bundle\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.607138 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-public-tls-certs\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.607256 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-scripts\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.607417 4741 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.607443 4741 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.612558 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-scripts\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.619017 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-config-data\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.619977 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-internal-tls-certs\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.622524 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-public-tls-certs\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.627346 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-combined-ca-bundle\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.638984 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-config-data" (OuterVolumeSpecName: "config-data") pod "bb586ac8-cf89-4499-81bf-8798eb5dbdb9" (UID: "bb586ac8-cf89-4499-81bf-8798eb5dbdb9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.644145 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzkxn\" (UniqueName: \"kubernetes.io/projected/bc9d7088-dbf9-41ac-8e6d-2531330f8934-kube-api-access-nzkxn\") pod \"aodh-0\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.663377 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb586ac8-cf89-4499-81bf-8798eb5dbdb9" (UID: "bb586ac8-cf89-4499-81bf-8798eb5dbdb9"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.679151 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.696166 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb586ac8-cf89-4499-81bf-8798eb5dbdb9","Type":"ContainerDied","Data":"9c3c6b761c4a12da40a6b0204886700fb4a01fa94aa82262bb1aee7ace2ff8a5"} Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.696367 4741 scope.go:117] "RemoveContainer" containerID="8f2a5a877667631f27652398b359c57386cfc3009e460a62efb8874a50da4aac" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.696657 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.722721 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.722754 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb586ac8-cf89-4499-81bf-8798eb5dbdb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.724876 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d","Type":"ContainerStarted","Data":"08645f262c08d05548cc94d1fc883457137e17c0fa946ff61d44eb1dd512136e"} Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.738303 4741 generic.go:334] "Generic (PLEG): container finished" podID="3b59536e-ad66-4a9d-89a6-2a6479e8be01" containerID="9ea4157099a711c21f9106b8271267a0332e3615f60d9fdee52b05841234c597" exitCode=0 Feb 26 
08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.740301 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" event={"ID":"3b59536e-ad66-4a9d-89a6-2a6479e8be01","Type":"ContainerDied","Data":"9ea4157099a711c21f9106b8271267a0332e3615f60d9fdee52b05841234c597"} Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.835100 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f44805d0-fcfe-4241-b658-dd8d905936fa" path="/var/lib/kubelet/pods/f44805d0-fcfe-4241-b658-dd8d905936fa/volumes" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.836729 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.853325 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.871540 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.887596 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.891154 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.891797 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.893182 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 08:44:05 crc kubenswrapper[4741]: I0226 08:44:05.905307 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.002757 4741 scope.go:117] "RemoveContainer" containerID="43f68a3507f7762e3c76f5606a43d8471942c401269b4946fd1c93e130aa6e1b" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.038270 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.038674 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.038856 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d1f040-be3f-4b5c-8094-f4dfefcb6124-run-httpd\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " 
pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.038991 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-config-data\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.039143 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-scripts\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.039374 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dj4j\" (UniqueName: \"kubernetes.io/projected/70d1f040-be3f-4b5c-8094-f4dfefcb6124-kube-api-access-8dj4j\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.039638 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d1f040-be3f-4b5c-8094-f4dfefcb6124-log-httpd\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.039760 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.142420 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d1f040-be3f-4b5c-8094-f4dfefcb6124-log-httpd\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.142475 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.142535 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.142613 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.142676 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d1f040-be3f-4b5c-8094-f4dfefcb6124-run-httpd\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.142716 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-config-data\") pod \"ceilometer-0\" (UID: 
\"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.142751 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-scripts\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.142805 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dj4j\" (UniqueName: \"kubernetes.io/projected/70d1f040-be3f-4b5c-8094-f4dfefcb6124-kube-api-access-8dj4j\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.146309 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d1f040-be3f-4b5c-8094-f4dfefcb6124-run-httpd\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.150285 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-config-data\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.150929 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.160818 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.165640 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d1f040-be3f-4b5c-8094-f4dfefcb6124-log-httpd\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.165980 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-scripts\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.172204 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.174841 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dj4j\" (UniqueName: \"kubernetes.io/projected/70d1f040-be3f-4b5c-8094-f4dfefcb6124-kube-api-access-8dj4j\") pod \"ceilometer-0\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.231382 4741 scope.go:117] "RemoveContainer" containerID="a1ecd9594d63c8baf4bb2ab4df3025ea093d27981f0be7e4aeda66fbfa6923cc" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.327831 4741 scope.go:117] "RemoveContainer" containerID="0df4530d23839a5fc0b811aae6d533be5478bfd93da57fa77ffcfecbac934286" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.358653 4741 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.375499 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.452257 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-ovsdbserver-nb\") pod \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.452377 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs9pj\" (UniqueName: \"kubernetes.io/projected/3b59536e-ad66-4a9d-89a6-2a6479e8be01-kube-api-access-qs9pj\") pod \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.452454 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-ovsdbserver-sb\") pod \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.452557 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-dns-swift-storage-0\") pod \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.452580 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-config\") pod 
\"3b59536e-ad66-4a9d-89a6-2a6479e8be01\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.452745 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-dns-svc\") pod \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\" (UID: \"3b59536e-ad66-4a9d-89a6-2a6479e8be01\") " Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.461700 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b59536e-ad66-4a9d-89a6-2a6479e8be01-kube-api-access-qs9pj" (OuterVolumeSpecName: "kube-api-access-qs9pj") pod "3b59536e-ad66-4a9d-89a6-2a6479e8be01" (UID: "3b59536e-ad66-4a9d-89a6-2a6479e8be01"). InnerVolumeSpecName "kube-api-access-qs9pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.534084 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3b59536e-ad66-4a9d-89a6-2a6479e8be01" (UID: "3b59536e-ad66-4a9d-89a6-2a6479e8be01"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.561285 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs9pj\" (UniqueName: \"kubernetes.io/projected/3b59536e-ad66-4a9d-89a6-2a6479e8be01-kube-api-access-qs9pj\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.561326 4741 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.562495 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3b59536e-ad66-4a9d-89a6-2a6479e8be01" (UID: "3b59536e-ad66-4a9d-89a6-2a6479e8be01"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.571874 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3b59536e-ad66-4a9d-89a6-2a6479e8be01" (UID: "3b59536e-ad66-4a9d-89a6-2a6479e8be01"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.584625 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-config" (OuterVolumeSpecName: "config") pod "3b59536e-ad66-4a9d-89a6-2a6479e8be01" (UID: "3b59536e-ad66-4a9d-89a6-2a6479e8be01"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.591817 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3b59536e-ad66-4a9d-89a6-2a6479e8be01" (UID: "3b59536e-ad66-4a9d-89a6-2a6479e8be01"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.666012 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.666044 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.666092 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.666138 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b59536e-ad66-4a9d-89a6-2a6479e8be01-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.683388 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.762388 4741 generic.go:334] "Generic (PLEG): container finished" podID="60020180-cead-4bfa-bd7c-7637b12f274c" containerID="68bd1024267507e2eaae7b2ae93d53e7c88a93cd775721defce9be561b65351c" exitCode=0 Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 
08:44:06.762484 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gbd2f" event={"ID":"60020180-cead-4bfa-bd7c-7637b12f274c","Type":"ContainerDied","Data":"68bd1024267507e2eaae7b2ae93d53e7c88a93cd775721defce9be561b65351c"} Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.771995 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d","Type":"ContainerStarted","Data":"1d5adda941d4ae80c34f9e915fb2c7e21980b61320dec5b8ceab6faa91a8c80e"} Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.772073 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d","Type":"ContainerStarted","Data":"0038a7dfef14536771acb3548c4bf18989d0d1258e179e567abb6fb54aec51c5"} Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.773408 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bc9d7088-dbf9-41ac-8e6d-2531330f8934","Type":"ContainerStarted","Data":"a2b4b1f4fe449a16d32a1ba089453f1850a1ae81a88520c2d433bd29ef2fca96"} Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.779835 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" event={"ID":"3b59536e-ad66-4a9d-89a6-2a6479e8be01","Type":"ContainerDied","Data":"e132243dae9161d9853bff5efdd8d9118670d3e96098180ff73e314aeb6ade67"} Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.779922 4741 scope.go:117] "RemoveContainer" containerID="9ea4157099a711c21f9106b8271267a0332e3615f60d9fdee52b05841234c597" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.781629 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-s26dw" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.819827 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.819805816 podStartE2EDuration="3.819805816s" podCreationTimestamp="2026-02-26 08:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:44:06.806900909 +0000 UTC m=+1881.802838326" watchObservedRunningTime="2026-02-26 08:44:06.819805816 +0000 UTC m=+1881.815743203" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.822054 4741 scope.go:117] "RemoveContainer" containerID="3b4f0d2e7eb6ecc43cc249861696caea39d12311905d5c7ea73e440403ee976f" Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.858015 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-s26dw"] Feb 26 08:44:06 crc kubenswrapper[4741]: I0226 08:44:06.873272 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-s26dw"] Feb 26 08:44:07 crc kubenswrapper[4741]: I0226 08:44:07.040629 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:44:07 crc kubenswrapper[4741]: I0226 08:44:07.797050 4741 generic.go:334] "Generic (PLEG): container finished" podID="458edee3-5c0d-45a1-93e3-80a518d7a3e8" containerID="b0ba8ab0528f3974a0418972692b89cb4c16621b64431f99c865fe7c4a74fc3f" exitCode=0 Feb 26 08:44:07 crc kubenswrapper[4741]: I0226 08:44:07.803897 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b59536e-ad66-4a9d-89a6-2a6479e8be01" path="/var/lib/kubelet/pods/3b59536e-ad66-4a9d-89a6-2a6479e8be01/volumes" Feb 26 08:44:07 crc kubenswrapper[4741]: I0226 08:44:07.804890 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb586ac8-cf89-4499-81bf-8798eb5dbdb9" 
path="/var/lib/kubelet/pods/bb586ac8-cf89-4499-81bf-8798eb5dbdb9/volumes" Feb 26 08:44:07 crc kubenswrapper[4741]: I0226 08:44:07.805916 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534924-qblgp" event={"ID":"458edee3-5c0d-45a1-93e3-80a518d7a3e8","Type":"ContainerDied","Data":"b0ba8ab0528f3974a0418972692b89cb4c16621b64431f99c865fe7c4a74fc3f"} Feb 26 08:44:07 crc kubenswrapper[4741]: I0226 08:44:07.805981 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d1f040-be3f-4b5c-8094-f4dfefcb6124","Type":"ContainerStarted","Data":"9cf59402e3182d7fef17dd53fe3af7e595653e911e91edc5d067431eb599478c"} Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.519755 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.592206 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-scripts\") pod \"60020180-cead-4bfa-bd7c-7637b12f274c\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.592534 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-config-data\") pod \"60020180-cead-4bfa-bd7c-7637b12f274c\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.592687 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpxj7\" (UniqueName: \"kubernetes.io/projected/60020180-cead-4bfa-bd7c-7637b12f274c-kube-api-access-lpxj7\") pod \"60020180-cead-4bfa-bd7c-7637b12f274c\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.592787 
4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-combined-ca-bundle\") pod \"60020180-cead-4bfa-bd7c-7637b12f274c\" (UID: \"60020180-cead-4bfa-bd7c-7637b12f274c\") " Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.598545 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60020180-cead-4bfa-bd7c-7637b12f274c-kube-api-access-lpxj7" (OuterVolumeSpecName: "kube-api-access-lpxj7") pod "60020180-cead-4bfa-bd7c-7637b12f274c" (UID: "60020180-cead-4bfa-bd7c-7637b12f274c"). InnerVolumeSpecName "kube-api-access-lpxj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.599363 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-scripts" (OuterVolumeSpecName: "scripts") pod "60020180-cead-4bfa-bd7c-7637b12f274c" (UID: "60020180-cead-4bfa-bd7c-7637b12f274c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.637775 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-config-data" (OuterVolumeSpecName: "config-data") pod "60020180-cead-4bfa-bd7c-7637b12f274c" (UID: "60020180-cead-4bfa-bd7c-7637b12f274c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.642904 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60020180-cead-4bfa-bd7c-7637b12f274c" (UID: "60020180-cead-4bfa-bd7c-7637b12f274c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.696643 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.696683 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.696695 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpxj7\" (UniqueName: \"kubernetes.io/projected/60020180-cead-4bfa-bd7c-7637b12f274c-kube-api-access-lpxj7\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.696706 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60020180-cead-4bfa-bd7c-7637b12f274c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.830406 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d1f040-be3f-4b5c-8094-f4dfefcb6124","Type":"ContainerStarted","Data":"3d148d4b00303cb4a46103de4c8495206800e46b6c580fd55481f597dd419a1c"} Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.832632 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bc9d7088-dbf9-41ac-8e6d-2531330f8934","Type":"ContainerStarted","Data":"7b0c28d8d39f68aba771dd5a7d6b388f01ca3ef5a50f41505ee6eb8cbe5ac7c2"} Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.834518 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gbd2f" Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.834502 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gbd2f" event={"ID":"60020180-cead-4bfa-bd7c-7637b12f274c","Type":"ContainerDied","Data":"6012eafdd3c43c7b466899a8e4336b9c512d79040b050ac869797dc41cf2e8d4"} Feb 26 08:44:08 crc kubenswrapper[4741]: I0226 08:44:08.834581 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6012eafdd3c43c7b466899a8e4336b9c512d79040b050ac869797dc41cf2e8d4" Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.033579 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.033963 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" containerName="nova-api-log" containerID="cri-o://0038a7dfef14536771acb3548c4bf18989d0d1258e179e567abb6fb54aec51c5" gracePeriod=30 Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.034870 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" containerName="nova-api-api" containerID="cri-o://1d5adda941d4ae80c34f9e915fb2c7e21980b61320dec5b8ceab6faa91a8c80e" gracePeriod=30 Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.084853 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.085231 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5f2e94a2-7f27-4a01-a33f-11320fb2a81d" containerName="nova-scheduler-scheduler" containerID="cri-o://e950376008eaadc2bd024799bb52cc5327deb1336b09a5811af5bc3cad2ce9e4" gracePeriod=30 Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 
08:44:09.114220 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.114976 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerName="nova-metadata-log" containerID="cri-o://445c74bf98826c26e2dee795c1ea3bf40f32b5c43ff7efc10a0393a9e2444451" gracePeriod=30 Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.116011 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerName="nova-metadata-metadata" containerID="cri-o://26392209e63d2b40031162baae2501b93f73176c4b2f6a7c8917da1a47404504" gracePeriod=30 Feb 26 08:44:09 crc kubenswrapper[4741]: E0226 08:44:09.343652 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e950376008eaadc2bd024799bb52cc5327deb1336b09a5811af5bc3cad2ce9e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 08:44:09 crc kubenswrapper[4741]: E0226 08:44:09.347002 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e950376008eaadc2bd024799bb52cc5327deb1336b09a5811af5bc3cad2ce9e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 08:44:09 crc kubenswrapper[4741]: E0226 08:44:09.348617 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e950376008eaadc2bd024799bb52cc5327deb1336b09a5811af5bc3cad2ce9e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 
08:44:09 crc kubenswrapper[4741]: E0226 08:44:09.348743 4741 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="5f2e94a2-7f27-4a01-a33f-11320fb2a81d" containerName="nova-scheduler-scheduler" Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.389863 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534924-qblgp" Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.529393 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb29n\" (UniqueName: \"kubernetes.io/projected/458edee3-5c0d-45a1-93e3-80a518d7a3e8-kube-api-access-jb29n\") pod \"458edee3-5c0d-45a1-93e3-80a518d7a3e8\" (UID: \"458edee3-5c0d-45a1-93e3-80a518d7a3e8\") " Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.545375 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/458edee3-5c0d-45a1-93e3-80a518d7a3e8-kube-api-access-jb29n" (OuterVolumeSpecName: "kube-api-access-jb29n") pod "458edee3-5c0d-45a1-93e3-80a518d7a3e8" (UID: "458edee3-5c0d-45a1-93e3-80a518d7a3e8"). InnerVolumeSpecName "kube-api-access-jb29n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.633469 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb29n\" (UniqueName: \"kubernetes.io/projected/458edee3-5c0d-45a1-93e3-80a518d7a3e8-kube-api-access-jb29n\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.853647 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534924-qblgp" event={"ID":"458edee3-5c0d-45a1-93e3-80a518d7a3e8","Type":"ContainerDied","Data":"c72e0841c328c415d9dfd206a385958095395003839101fa0d08b6d0209d124a"} Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.853700 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c72e0841c328c415d9dfd206a385958095395003839101fa0d08b6d0209d124a" Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.853776 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534924-qblgp" Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.865992 4741 generic.go:334] "Generic (PLEG): container finished" podID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerID="445c74bf98826c26e2dee795c1ea3bf40f32b5c43ff7efc10a0393a9e2444451" exitCode=143 Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.866082 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"593cc4eb-043e-484d-a1b5-9d83fbd630e7","Type":"ContainerDied","Data":"445c74bf98826c26e2dee795c1ea3bf40f32b5c43ff7efc10a0393a9e2444451"} Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.877940 4741 generic.go:334] "Generic (PLEG): container finished" podID="7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" containerID="1d5adda941d4ae80c34f9e915fb2c7e21980b61320dec5b8ceab6faa91a8c80e" exitCode=0 Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.877985 4741 generic.go:334] "Generic (PLEG): 
container finished" podID="7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" containerID="0038a7dfef14536771acb3548c4bf18989d0d1258e179e567abb6fb54aec51c5" exitCode=143 Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.878013 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d","Type":"ContainerDied","Data":"1d5adda941d4ae80c34f9e915fb2c7e21980b61320dec5b8ceab6faa91a8c80e"} Feb 26 08:44:09 crc kubenswrapper[4741]: I0226 08:44:09.878053 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d","Type":"ContainerDied","Data":"0038a7dfef14536771acb3548c4bf18989d0d1258e179e567abb6fb54aec51c5"} Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.080291 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534918-npnn9"] Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.094489 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534918-npnn9"] Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.580847 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.668813 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-logs\") pod \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.669205 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-combined-ca-bundle\") pod \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.669477 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-config-data\") pod \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.670421 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-logs" (OuterVolumeSpecName: "logs") pod "7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" (UID: "7a9bbb96-c5b1-40d3-9fd5-79df6c55381d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.672799 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-public-tls-certs\") pod \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.672880 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmfqq\" (UniqueName: \"kubernetes.io/projected/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-kube-api-access-mmfqq\") pod \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.672932 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-internal-tls-certs\") pod \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\" (UID: \"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d\") " Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.674761 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-logs\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.703374 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-kube-api-access-mmfqq" (OuterVolumeSpecName: "kube-api-access-mmfqq") pod "7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" (UID: "7a9bbb96-c5b1-40d3-9fd5-79df6c55381d"). InnerVolumeSpecName "kube-api-access-mmfqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.780014 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmfqq\" (UniqueName: \"kubernetes.io/projected/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-kube-api-access-mmfqq\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.869440 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" (UID: "7a9bbb96-c5b1-40d3-9fd5-79df6c55381d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.887800 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.901811 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7a9bbb96-c5b1-40d3-9fd5-79df6c55381d","Type":"ContainerDied","Data":"08645f262c08d05548cc94d1fc883457137e17c0fa946ff61d44eb1dd512136e"} Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.901894 4741 scope.go:117] "RemoveContainer" containerID="1d5adda941d4ae80c34f9e915fb2c7e21980b61320dec5b8ceab6faa91a8c80e" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.901934 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.903870 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-config-data" (OuterVolumeSpecName: "config-data") pod "7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" (UID: "7a9bbb96-c5b1-40d3-9fd5-79df6c55381d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.931405 4741 scope.go:117] "RemoveContainer" containerID="0038a7dfef14536771acb3548c4bf18989d0d1258e179e567abb6fb54aec51c5" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.932551 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" (UID: "7a9bbb96-c5b1-40d3-9fd5-79df6c55381d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.954821 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" (UID: "7a9bbb96-c5b1-40d3-9fd5-79df6c55381d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.993023 4741 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.993066 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:10 crc kubenswrapper[4741]: I0226 08:44:10.993075 4741 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.255937 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.272100 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.298034 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 08:44:11 crc kubenswrapper[4741]: E0226 08:44:11.298805 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" containerName="nova-api-log" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.298827 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" containerName="nova-api-log" Feb 26 08:44:11 crc kubenswrapper[4741]: E0226 08:44:11.298848 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458edee3-5c0d-45a1-93e3-80a518d7a3e8" containerName="oc" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.298854 4741 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="458edee3-5c0d-45a1-93e3-80a518d7a3e8" containerName="oc" Feb 26 08:44:11 crc kubenswrapper[4741]: E0226 08:44:11.298893 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60020180-cead-4bfa-bd7c-7637b12f274c" containerName="nova-manage" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.298899 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="60020180-cead-4bfa-bd7c-7637b12f274c" containerName="nova-manage" Feb 26 08:44:11 crc kubenswrapper[4741]: E0226 08:44:11.298911 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" containerName="nova-api-api" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.298920 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" containerName="nova-api-api" Feb 26 08:44:11 crc kubenswrapper[4741]: E0226 08:44:11.298940 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b59536e-ad66-4a9d-89a6-2a6479e8be01" containerName="init" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.298945 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b59536e-ad66-4a9d-89a6-2a6479e8be01" containerName="init" Feb 26 08:44:11 crc kubenswrapper[4741]: E0226 08:44:11.298972 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b59536e-ad66-4a9d-89a6-2a6479e8be01" containerName="dnsmasq-dns" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.298979 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b59536e-ad66-4a9d-89a6-2a6479e8be01" containerName="dnsmasq-dns" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.299264 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="458edee3-5c0d-45a1-93e3-80a518d7a3e8" containerName="oc" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.299279 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" containerName="nova-api-log" 
Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.299312 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b59536e-ad66-4a9d-89a6-2a6479e8be01" containerName="dnsmasq-dns" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.299323 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" containerName="nova-api-api" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.299339 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="60020180-cead-4bfa-bd7c-7637b12f274c" containerName="nova-manage" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.300971 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.305505 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.305895 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.306227 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.318305 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.408635 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3becc56-1879-4497-8208-fb2c62a6f0e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.409030 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft57h\" (UniqueName: 
\"kubernetes.io/projected/b3becc56-1879-4497-8208-fb2c62a6f0e4-kube-api-access-ft57h\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.409815 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3becc56-1879-4497-8208-fb2c62a6f0e4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.410519 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3becc56-1879-4497-8208-fb2c62a6f0e4-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.410901 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3becc56-1879-4497-8208-fb2c62a6f0e4-logs\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.410938 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3becc56-1879-4497-8208-fb2c62a6f0e4-config-data\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.516438 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3becc56-1879-4497-8208-fb2c62a6f0e4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " 
pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.516591 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3becc56-1879-4497-8208-fb2c62a6f0e4-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.516640 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3becc56-1879-4497-8208-fb2c62a6f0e4-logs\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.516665 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3becc56-1879-4497-8208-fb2c62a6f0e4-config-data\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.516711 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3becc56-1879-4497-8208-fb2c62a6f0e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.516739 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft57h\" (UniqueName: \"kubernetes.io/projected/b3becc56-1879-4497-8208-fb2c62a6f0e4-kube-api-access-ft57h\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.533593 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b3becc56-1879-4497-8208-fb2c62a6f0e4-logs\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.554679 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3becc56-1879-4497-8208-fb2c62a6f0e4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.555278 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3becc56-1879-4497-8208-fb2c62a6f0e4-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.555827 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3becc56-1879-4497-8208-fb2c62a6f0e4-config-data\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.558983 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3becc56-1879-4497-8208-fb2c62a6f0e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.559621 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft57h\" (UniqueName: \"kubernetes.io/projected/b3becc56-1879-4497-8208-fb2c62a6f0e4-kube-api-access-ft57h\") pod \"nova-api-0\" (UID: \"b3becc56-1879-4497-8208-fb2c62a6f0e4\") " pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.629727 4741 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.787032 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:44:11 crc kubenswrapper[4741]: E0226 08:44:11.787487 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.812306 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9bbb96-c5b1-40d3-9fd5-79df6c55381d" path="/var/lib/kubelet/pods/7a9bbb96-c5b1-40d3-9fd5-79df6c55381d/volumes" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.813207 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da47daf-0aba-4cf1-bcd5-585a7b3e2b83" path="/var/lib/kubelet/pods/8da47daf-0aba-4cf1-bcd5-585a7b3e2b83/volumes" Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.939930 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d1f040-be3f-4b5c-8094-f4dfefcb6124","Type":"ContainerStarted","Data":"a2a0c2dafc7fd902fa130ef4c1de1dd8dd721a66e63a462f5589e74a199ef466"} Feb 26 08:44:11 crc kubenswrapper[4741]: I0226 08:44:11.959395 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bc9d7088-dbf9-41ac-8e6d-2531330f8934","Type":"ContainerStarted","Data":"06fa3276f76ae9b60f7ef38fe32d88742b02f4f1db93e7d2b583718e4c690179"} Feb 26 08:44:12 crc kubenswrapper[4741]: I0226 08:44:12.248902 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 08:44:12 
crc kubenswrapper[4741]: W0226 08:44:12.267027 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3becc56_1879_4497_8208_fb2c62a6f0e4.slice/crio-a07e1524727efc7a5e3179319d44bc276ff41b00812633ca71c2ff9e54250995 WatchSource:0}: Error finding container a07e1524727efc7a5e3179319d44bc276ff41b00812633ca71c2ff9e54250995: Status 404 returned error can't find the container with id a07e1524727efc7a5e3179319d44bc276ff41b00812633ca71c2ff9e54250995 Feb 26 08:44:12 crc kubenswrapper[4741]: I0226 08:44:12.557798 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.5:8775/\": read tcp 10.217.0.2:52652->10.217.1.5:8775: read: connection reset by peer" Feb 26 08:44:12 crc kubenswrapper[4741]: I0226 08:44:12.557827 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.5:8775/\": read tcp 10.217.0.2:52656->10.217.1.5:8775: read: connection reset by peer" Feb 26 08:44:12 crc kubenswrapper[4741]: I0226 08:44:12.988796 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bc9d7088-dbf9-41ac-8e6d-2531330f8934","Type":"ContainerStarted","Data":"ab3e9421b95f828cea02cbc3a86cfc8f56c3c459419250faa3976bb735d1eb40"} Feb 26 08:44:12 crc kubenswrapper[4741]: I0226 08:44:12.996733 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3becc56-1879-4497-8208-fb2c62a6f0e4","Type":"ContainerStarted","Data":"a07e1524727efc7a5e3179319d44bc276ff41b00812633ca71c2ff9e54250995"} Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.009028 4741 generic.go:334] "Generic (PLEG): container finished" 
podID="5f2e94a2-7f27-4a01-a33f-11320fb2a81d" containerID="e950376008eaadc2bd024799bb52cc5327deb1336b09a5811af5bc3cad2ce9e4" exitCode=0 Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.009217 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5f2e94a2-7f27-4a01-a33f-11320fb2a81d","Type":"ContainerDied","Data":"e950376008eaadc2bd024799bb52cc5327deb1336b09a5811af5bc3cad2ce9e4"} Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.024343 4741 generic.go:334] "Generic (PLEG): container finished" podID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerID="26392209e63d2b40031162baae2501b93f73176c4b2f6a7c8917da1a47404504" exitCode=0 Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.024407 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"593cc4eb-043e-484d-a1b5-9d83fbd630e7","Type":"ContainerDied","Data":"26392209e63d2b40031162baae2501b93f73176c4b2f6a7c8917da1a47404504"} Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.087844 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.183573 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2kvr\" (UniqueName: \"kubernetes.io/projected/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-kube-api-access-k2kvr\") pod \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\" (UID: \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\") " Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.184141 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-config-data\") pod \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\" (UID: \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\") " Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.184409 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-combined-ca-bundle\") pod \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\" (UID: \"5f2e94a2-7f27-4a01-a33f-11320fb2a81d\") " Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.193390 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-kube-api-access-k2kvr" (OuterVolumeSpecName: "kube-api-access-k2kvr") pod "5f2e94a2-7f27-4a01-a33f-11320fb2a81d" (UID: "5f2e94a2-7f27-4a01-a33f-11320fb2a81d"). InnerVolumeSpecName "kube-api-access-k2kvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.236720 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-config-data" (OuterVolumeSpecName: "config-data") pod "5f2e94a2-7f27-4a01-a33f-11320fb2a81d" (UID: "5f2e94a2-7f27-4a01-a33f-11320fb2a81d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.270232 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f2e94a2-7f27-4a01-a33f-11320fb2a81d" (UID: "5f2e94a2-7f27-4a01-a33f-11320fb2a81d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.288410 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.288450 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.288465 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2kvr\" (UniqueName: \"kubernetes.io/projected/5f2e94a2-7f27-4a01-a33f-11320fb2a81d-kube-api-access-k2kvr\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.702300 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.805502 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593cc4eb-043e-484d-a1b5-9d83fbd630e7-logs\") pod \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.805601 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv24z\" (UniqueName: \"kubernetes.io/projected/593cc4eb-043e-484d-a1b5-9d83fbd630e7-kube-api-access-qv24z\") pod \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.805788 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-nova-metadata-tls-certs\") pod \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.805994 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/593cc4eb-043e-484d-a1b5-9d83fbd630e7-logs" (OuterVolumeSpecName: "logs") pod "593cc4eb-043e-484d-a1b5-9d83fbd630e7" (UID: "593cc4eb-043e-484d-a1b5-9d83fbd630e7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.806017 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-config-data\") pod \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.806192 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-combined-ca-bundle\") pod \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\" (UID: \"593cc4eb-043e-484d-a1b5-9d83fbd630e7\") " Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.807245 4741 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593cc4eb-043e-484d-a1b5-9d83fbd630e7-logs\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.812978 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593cc4eb-043e-484d-a1b5-9d83fbd630e7-kube-api-access-qv24z" (OuterVolumeSpecName: "kube-api-access-qv24z") pod "593cc4eb-043e-484d-a1b5-9d83fbd630e7" (UID: "593cc4eb-043e-484d-a1b5-9d83fbd630e7"). InnerVolumeSpecName "kube-api-access-qv24z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.849088 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "593cc4eb-043e-484d-a1b5-9d83fbd630e7" (UID: "593cc4eb-043e-484d-a1b5-9d83fbd630e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.930861 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-config-data" (OuterVolumeSpecName: "config-data") pod "593cc4eb-043e-484d-a1b5-9d83fbd630e7" (UID: "593cc4eb-043e-484d-a1b5-9d83fbd630e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.947058 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.947144 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv24z\" (UniqueName: \"kubernetes.io/projected/593cc4eb-043e-484d-a1b5-9d83fbd630e7-kube-api-access-qv24z\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:13 crc kubenswrapper[4741]: I0226 08:44:13.953333 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "593cc4eb-043e-484d-a1b5-9d83fbd630e7" (UID: "593cc4eb-043e-484d-a1b5-9d83fbd630e7"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.052232 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.052562 4741 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/593cc4eb-043e-484d-a1b5-9d83fbd630e7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.068556 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3becc56-1879-4497-8208-fb2c62a6f0e4","Type":"ContainerStarted","Data":"bd7ca2b90c5c3877ba936f8cd3151368bc5d86c50d0a1faadbdb5793499624dd"} Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.068617 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3becc56-1879-4497-8208-fb2c62a6f0e4","Type":"ContainerStarted","Data":"fbfb421315dc03f37faa8f9b82c477c2c8876d042e1fec57c881b1ba6c41bf12"} Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.071721 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5f2e94a2-7f27-4a01-a33f-11320fb2a81d","Type":"ContainerDied","Data":"4a92d6b9b7b3932edfed1360763d8349049c65fb1b7b8b095994f57f7f8ade90"} Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.071817 4741 scope.go:117] "RemoveContainer" containerID="e950376008eaadc2bd024799bb52cc5327deb1336b09a5811af5bc3cad2ce9e4" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.072024 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.092833 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"593cc4eb-043e-484d-a1b5-9d83fbd630e7","Type":"ContainerDied","Data":"48c6f9f6cb1575417168de80f8ae444b07dd01f5cad4129c569bc055d0cb65b9"} Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.092967 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.112419 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.112389777 podStartE2EDuration="3.112389777s" podCreationTimestamp="2026-02-26 08:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:44:14.101305352 +0000 UTC m=+1889.097242739" watchObservedRunningTime="2026-02-26 08:44:14.112389777 +0000 UTC m=+1889.108327164" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.130395 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d1f040-be3f-4b5c-8094-f4dfefcb6124","Type":"ContainerStarted","Data":"92da333e762c89114f59e568592075c1ecdf64a6fbe590b1f0b4d48018c7e962"} Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.147875 4741 scope.go:117] "RemoveContainer" containerID="26392209e63d2b40031162baae2501b93f73176c4b2f6a7c8917da1a47404504" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.155954 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.199177 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.234369 4741 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 26 08:44:14 crc kubenswrapper[4741]: E0226 08:44:14.235334 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerName="nova-metadata-metadata" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.235433 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerName="nova-metadata-metadata" Feb 26 08:44:14 crc kubenswrapper[4741]: E0226 08:44:14.235526 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e94a2-7f27-4a01-a33f-11320fb2a81d" containerName="nova-scheduler-scheduler" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.235580 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e94a2-7f27-4a01-a33f-11320fb2a81d" containerName="nova-scheduler-scheduler" Feb 26 08:44:14 crc kubenswrapper[4741]: E0226 08:44:14.235641 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerName="nova-metadata-log" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.235701 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerName="nova-metadata-log" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.236206 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2e94a2-7f27-4a01-a33f-11320fb2a81d" containerName="nova-scheduler-scheduler" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.236299 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerName="nova-metadata-metadata" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.236362 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" containerName="nova-metadata-log" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.237372 4741 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.250823 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.251870 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.289797 4741 scope.go:117] "RemoveContainer" containerID="445c74bf98826c26e2dee795c1ea3bf40f32b5c43ff7efc10a0393a9e2444451" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.325976 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.350424 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.374082 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce926f0a-c4d5-4c36-852e-e8c6bc44394e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce926f0a-c4d5-4c36-852e-e8c6bc44394e\") " pod="openstack/nova-scheduler-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.374249 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce926f0a-c4d5-4c36-852e-e8c6bc44394e-config-data\") pod \"nova-scheduler-0\" (UID: \"ce926f0a-c4d5-4c36-852e-e8c6bc44394e\") " pod="openstack/nova-scheduler-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.374306 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pftm7\" (UniqueName: \"kubernetes.io/projected/ce926f0a-c4d5-4c36-852e-e8c6bc44394e-kube-api-access-pftm7\") pod \"nova-scheduler-0\" (UID: \"ce926f0a-c4d5-4c36-852e-e8c6bc44394e\") " 
pod="openstack/nova-scheduler-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.390868 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.393080 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.405261 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.405430 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.422071 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.477551 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.477918 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgg69\" (UniqueName: \"kubernetes.io/projected/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-kube-api-access-vgg69\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.478100 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-config-data\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " 
pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.478317 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-logs\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.478578 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce926f0a-c4d5-4c36-852e-e8c6bc44394e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce926f0a-c4d5-4c36-852e-e8c6bc44394e\") " pod="openstack/nova-scheduler-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.479945 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce926f0a-c4d5-4c36-852e-e8c6bc44394e-config-data\") pod \"nova-scheduler-0\" (UID: \"ce926f0a-c4d5-4c36-852e-e8c6bc44394e\") " pod="openstack/nova-scheduler-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.480054 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pftm7\" (UniqueName: \"kubernetes.io/projected/ce926f0a-c4d5-4c36-852e-e8c6bc44394e-kube-api-access-pftm7\") pod \"nova-scheduler-0\" (UID: \"ce926f0a-c4d5-4c36-852e-e8c6bc44394e\") " pod="openstack/nova-scheduler-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.480373 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.500926 4741 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce926f0a-c4d5-4c36-852e-e8c6bc44394e-config-data\") pod \"nova-scheduler-0\" (UID: \"ce926f0a-c4d5-4c36-852e-e8c6bc44394e\") " pod="openstack/nova-scheduler-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.509917 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pftm7\" (UniqueName: \"kubernetes.io/projected/ce926f0a-c4d5-4c36-852e-e8c6bc44394e-kube-api-access-pftm7\") pod \"nova-scheduler-0\" (UID: \"ce926f0a-c4d5-4c36-852e-e8c6bc44394e\") " pod="openstack/nova-scheduler-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.512684 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce926f0a-c4d5-4c36-852e-e8c6bc44394e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce926f0a-c4d5-4c36-852e-e8c6bc44394e\") " pod="openstack/nova-scheduler-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.583855 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.584013 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.584037 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgg69\" (UniqueName: 
\"kubernetes.io/projected/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-kube-api-access-vgg69\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.584085 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-config-data\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.584157 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-logs\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.584911 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-logs\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.590276 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.590689 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-config-data\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 
08:44:14.591449 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.603401 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgg69\" (UniqueName: \"kubernetes.io/projected/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-kube-api-access-vgg69\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.603843 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495ff256-6ba5-4e6c-b97c-c3a8c15a595b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"495ff256-6ba5-4e6c-b97c-c3a8c15a595b\") " pod="openstack/nova-metadata-0" Feb 26 08:44:14 crc kubenswrapper[4741]: I0226 08:44:14.738634 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 08:44:15 crc kubenswrapper[4741]: I0226 08:44:15.208593 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bc9d7088-dbf9-41ac-8e6d-2531330f8934","Type":"ContainerStarted","Data":"9ef9754f8fb9ab0ed9b71c4d6749f2170819212727f6ac2b74bd6bff3ad762d9"} Feb 26 08:44:15 crc kubenswrapper[4741]: I0226 08:44:15.267778 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.104048914 podStartE2EDuration="10.267746102s" podCreationTimestamp="2026-02-26 08:44:05 +0000 UTC" firstStartedPulling="2026-02-26 08:44:06.667390843 +0000 UTC m=+1881.663328230" lastFinishedPulling="2026-02-26 08:44:13.831088031 +0000 UTC m=+1888.827025418" observedRunningTime="2026-02-26 08:44:15.234440775 +0000 UTC m=+1890.230378172" watchObservedRunningTime="2026-02-26 08:44:15.267746102 +0000 UTC m=+1890.263683489" Feb 26 08:44:15 crc kubenswrapper[4741]: I0226 08:44:15.440759 4741 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 08:44:15 crc kubenswrapper[4741]: I0226 08:44:15.599028 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 08:44:15 crc kubenswrapper[4741]: I0226 08:44:15.808584 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593cc4eb-043e-484d-a1b5-9d83fbd630e7" path="/var/lib/kubelet/pods/593cc4eb-043e-484d-a1b5-9d83fbd630e7/volumes" Feb 26 08:44:15 crc kubenswrapper[4741]: I0226 08:44:15.809365 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f2e94a2-7f27-4a01-a33f-11320fb2a81d" path="/var/lib/kubelet/pods/5f2e94a2-7f27-4a01-a33f-11320fb2a81d/volumes" Feb 26 08:44:16 crc kubenswrapper[4741]: I0226 08:44:16.228989 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce926f0a-c4d5-4c36-852e-e8c6bc44394e","Type":"ContainerStarted","Data":"ec48a3a34cb3fd58737d58a09ca480c9276893aad3408e059fd3b3c644bd0d18"} Feb 26 08:44:16 crc kubenswrapper[4741]: I0226 08:44:16.238266 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"495ff256-6ba5-4e6c-b97c-c3a8c15a595b","Type":"ContainerStarted","Data":"ee8885424bb92320c2f5d3d0f53777a7b5dd9ef7cfddb41e470bfe8971b3a1fe"} Feb 26 08:44:17 crc kubenswrapper[4741]: I0226 08:44:17.252600 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d1f040-be3f-4b5c-8094-f4dfefcb6124","Type":"ContainerStarted","Data":"d47fbf53ac3ae197319c30e13d124c0d724b17842773c321b170f5b200266d87"} Feb 26 08:44:17 crc kubenswrapper[4741]: I0226 08:44:17.253272 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 08:44:17 crc kubenswrapper[4741]: I0226 08:44:17.255335 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"ce926f0a-c4d5-4c36-852e-e8c6bc44394e","Type":"ContainerStarted","Data":"8b952f3e34a74b0473f8f68950dd82d040b82d1850ef6040655cb48144d3d710"} Feb 26 08:44:17 crc kubenswrapper[4741]: I0226 08:44:17.258582 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"495ff256-6ba5-4e6c-b97c-c3a8c15a595b","Type":"ContainerStarted","Data":"f145212827ae3abc03a275be73945adb48934963212c7c8747083c0ddbc40937"} Feb 26 08:44:17 crc kubenswrapper[4741]: I0226 08:44:17.258627 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"495ff256-6ba5-4e6c-b97c-c3a8c15a595b","Type":"ContainerStarted","Data":"6a440eeb2cade776e2f2919af5825be96ad5b05efc77cca8d169f3d2ae6c10ca"} Feb 26 08:44:17 crc kubenswrapper[4741]: I0226 08:44:17.285091 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.489449161 podStartE2EDuration="12.28506618s" podCreationTimestamp="2026-02-26 08:44:05 +0000 UTC" firstStartedPulling="2026-02-26 08:44:07.036692551 +0000 UTC m=+1882.032629938" lastFinishedPulling="2026-02-26 08:44:15.83230957 +0000 UTC m=+1890.828246957" observedRunningTime="2026-02-26 08:44:17.27769222 +0000 UTC m=+1892.273629607" watchObservedRunningTime="2026-02-26 08:44:17.28506618 +0000 UTC m=+1892.281003567" Feb 26 08:44:17 crc kubenswrapper[4741]: I0226 08:44:17.319146 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.319098397 podStartE2EDuration="3.319098397s" podCreationTimestamp="2026-02-26 08:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:44:17.299654135 +0000 UTC m=+1892.295591532" watchObservedRunningTime="2026-02-26 08:44:17.319098397 +0000 UTC m=+1892.315035784" Feb 26 08:44:17 crc kubenswrapper[4741]: I0226 08:44:17.335207 4741 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.335181534 podStartE2EDuration="3.335181534s" podCreationTimestamp="2026-02-26 08:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:44:17.324627705 +0000 UTC m=+1892.320565112" watchObservedRunningTime="2026-02-26 08:44:17.335181534 +0000 UTC m=+1892.331118921" Feb 26 08:44:19 crc kubenswrapper[4741]: I0226 08:44:19.591659 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 08:44:19 crc kubenswrapper[4741]: I0226 08:44:19.739465 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 08:44:19 crc kubenswrapper[4741]: I0226 08:44:19.739526 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 08:44:21 crc kubenswrapper[4741]: I0226 08:44:21.630873 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 08:44:21 crc kubenswrapper[4741]: I0226 08:44:21.632546 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 08:44:22 crc kubenswrapper[4741]: I0226 08:44:22.645333 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b3becc56-1879-4497-8208-fb2c62a6f0e4" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.18:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 08:44:22 crc kubenswrapper[4741]: I0226 08:44:22.645408 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b3becc56-1879-4497-8208-fb2c62a6f0e4" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.18:8774/\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Feb 26 08:44:24 crc kubenswrapper[4741]: I0226 08:44:24.592137 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 08:44:24 crc kubenswrapper[4741]: I0226 08:44:24.625208 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 08:44:24 crc kubenswrapper[4741]: I0226 08:44:24.740288 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 08:44:24 crc kubenswrapper[4741]: I0226 08:44:24.740722 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 08:44:25 crc kubenswrapper[4741]: I0226 08:44:25.418587 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 08:44:25 crc kubenswrapper[4741]: I0226 08:44:25.753343 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="495ff256-6ba5-4e6c-b97c-c3a8c15a595b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.20:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 08:44:25 crc kubenswrapper[4741]: I0226 08:44:25.753405 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="495ff256-6ba5-4e6c-b97c-c3a8c15a595b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.20:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 08:44:26 crc kubenswrapper[4741]: I0226 08:44:26.788247 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:44:26 crc kubenswrapper[4741]: E0226 08:44:26.789255 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:44:31 crc kubenswrapper[4741]: I0226 08:44:31.852631 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 08:44:31 crc kubenswrapper[4741]: I0226 08:44:31.854749 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 08:44:31 crc kubenswrapper[4741]: I0226 08:44:31.861670 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 08:44:31 crc kubenswrapper[4741]: I0226 08:44:31.866663 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 08:44:32 crc kubenswrapper[4741]: I0226 08:44:32.475326 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 08:44:32 crc kubenswrapper[4741]: I0226 08:44:32.482856 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 08:44:34 crc kubenswrapper[4741]: I0226 08:44:34.745620 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 08:44:34 crc kubenswrapper[4741]: I0226 08:44:34.747449 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 08:44:34 crc kubenswrapper[4741]: I0226 08:44:34.752981 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 08:44:35 crc kubenswrapper[4741]: I0226 08:44:35.602704 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 
08:44:36 crc kubenswrapper[4741]: I0226 08:44:36.397119 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 08:44:38 crc kubenswrapper[4741]: I0226 08:44:38.789419 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:44:38 crc kubenswrapper[4741]: E0226 08:44:38.790023 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:44:47 crc kubenswrapper[4741]: I0226 08:44:47.805841 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-8n574"] Feb 26 08:44:47 crc kubenswrapper[4741]: I0226 08:44:47.806537 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-8n574"] Feb 26 08:44:47 crc kubenswrapper[4741]: I0226 08:44:47.894016 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-hng7c"] Feb 26 08:44:47 crc kubenswrapper[4741]: I0226 08:44:47.896030 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-hng7c" Feb 26 08:44:47 crc kubenswrapper[4741]: I0226 08:44:47.957656 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-hng7c"] Feb 26 08:44:47 crc kubenswrapper[4741]: I0226 08:44:47.989575 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-config-data\") pod \"heat-db-sync-hng7c\" (UID: \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\") " pod="openstack/heat-db-sync-hng7c" Feb 26 08:44:47 crc kubenswrapper[4741]: I0226 08:44:47.989782 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x7j9\" (UniqueName: \"kubernetes.io/projected/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-kube-api-access-4x7j9\") pod \"heat-db-sync-hng7c\" (UID: \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\") " pod="openstack/heat-db-sync-hng7c" Feb 26 08:44:47 crc kubenswrapper[4741]: I0226 08:44:47.989900 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-combined-ca-bundle\") pod \"heat-db-sync-hng7c\" (UID: \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\") " pod="openstack/heat-db-sync-hng7c" Feb 26 08:44:48 crc kubenswrapper[4741]: I0226 08:44:48.093059 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-config-data\") pod \"heat-db-sync-hng7c\" (UID: \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\") " pod="openstack/heat-db-sync-hng7c" Feb 26 08:44:48 crc kubenswrapper[4741]: I0226 08:44:48.093538 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x7j9\" (UniqueName: 
\"kubernetes.io/projected/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-kube-api-access-4x7j9\") pod \"heat-db-sync-hng7c\" (UID: \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\") " pod="openstack/heat-db-sync-hng7c" Feb 26 08:44:48 crc kubenswrapper[4741]: I0226 08:44:48.093643 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-combined-ca-bundle\") pod \"heat-db-sync-hng7c\" (UID: \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\") " pod="openstack/heat-db-sync-hng7c" Feb 26 08:44:48 crc kubenswrapper[4741]: I0226 08:44:48.103613 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-config-data\") pod \"heat-db-sync-hng7c\" (UID: \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\") " pod="openstack/heat-db-sync-hng7c" Feb 26 08:44:48 crc kubenswrapper[4741]: I0226 08:44:48.108679 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-combined-ca-bundle\") pod \"heat-db-sync-hng7c\" (UID: \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\") " pod="openstack/heat-db-sync-hng7c" Feb 26 08:44:48 crc kubenswrapper[4741]: I0226 08:44:48.110161 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x7j9\" (UniqueName: \"kubernetes.io/projected/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-kube-api-access-4x7j9\") pod \"heat-db-sync-hng7c\" (UID: \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\") " pod="openstack/heat-db-sync-hng7c" Feb 26 08:44:48 crc kubenswrapper[4741]: I0226 08:44:48.263417 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-hng7c" Feb 26 08:44:48 crc kubenswrapper[4741]: I0226 08:44:48.780397 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-hng7c"] Feb 26 08:44:49 crc kubenswrapper[4741]: I0226 08:44:49.747592 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hng7c" event={"ID":"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3","Type":"ContainerStarted","Data":"7d7773cf41e0d7014488d78c9f9b9922ee0c941a3fdb395d4018ecf488f5b776"} Feb 26 08:44:49 crc kubenswrapper[4741]: I0226 08:44:49.813484 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61912c33-f4b2-4d1e-a2a0-df63c70ac97f" path="/var/lib/kubelet/pods/61912c33-f4b2-4d1e-a2a0-df63c70ac97f/volumes" Feb 26 08:44:49 crc kubenswrapper[4741]: I0226 08:44:49.941411 4741 scope.go:117] "RemoveContainer" containerID="4f564089c14874fe96bd1bc26baaff5abf64f083fde542dd389dcfa57693b7ea" Feb 26 08:44:49 crc kubenswrapper[4741]: I0226 08:44:49.943030 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 26 08:44:50 crc kubenswrapper[4741]: I0226 08:44:50.162921 4741 scope.go:117] "RemoveContainer" containerID="10ef8d5ce0856ab52a2b216b991f207ecd121aa918160cd4cb37863bdec80621" Feb 26 08:44:50 crc kubenswrapper[4741]: I0226 08:44:50.477502 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:44:50 crc kubenswrapper[4741]: I0226 08:44:50.478152 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="ceilometer-central-agent" containerID="cri-o://3d148d4b00303cb4a46103de4c8495206800e46b6c580fd55481f597dd419a1c" gracePeriod=30 Feb 26 08:44:50 crc kubenswrapper[4741]: I0226 08:44:50.478216 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" 
containerName="sg-core" containerID="cri-o://92da333e762c89114f59e568592075c1ecdf64a6fbe590b1f0b4d48018c7e962" gracePeriod=30 Feb 26 08:44:50 crc kubenswrapper[4741]: I0226 08:44:50.478232 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="proxy-httpd" containerID="cri-o://d47fbf53ac3ae197319c30e13d124c0d724b17842773c321b170f5b200266d87" gracePeriod=30 Feb 26 08:44:50 crc kubenswrapper[4741]: I0226 08:44:50.478244 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="ceilometer-notification-agent" containerID="cri-o://a2a0c2dafc7fd902fa130ef4c1de1dd8dd721a66e63a462f5589e74a199ef466" gracePeriod=30 Feb 26 08:44:50 crc kubenswrapper[4741]: I0226 08:44:50.818163 4741 generic.go:334] "Generic (PLEG): container finished" podID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerID="d47fbf53ac3ae197319c30e13d124c0d724b17842773c321b170f5b200266d87" exitCode=0 Feb 26 08:44:50 crc kubenswrapper[4741]: I0226 08:44:50.818425 4741 generic.go:334] "Generic (PLEG): container finished" podID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerID="92da333e762c89114f59e568592075c1ecdf64a6fbe590b1f0b4d48018c7e962" exitCode=2 Feb 26 08:44:50 crc kubenswrapper[4741]: I0226 08:44:50.818267 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d1f040-be3f-4b5c-8094-f4dfefcb6124","Type":"ContainerDied","Data":"d47fbf53ac3ae197319c30e13d124c0d724b17842773c321b170f5b200266d87"} Feb 26 08:44:50 crc kubenswrapper[4741]: I0226 08:44:50.818481 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d1f040-be3f-4b5c-8094-f4dfefcb6124","Type":"ContainerDied","Data":"92da333e762c89114f59e568592075c1ecdf64a6fbe590b1f0b4d48018c7e962"} Feb 26 08:44:51 crc kubenswrapper[4741]: I0226 08:44:51.186642 
4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 08:44:51 crc kubenswrapper[4741]: I0226 08:44:51.837474 4741 generic.go:334] "Generic (PLEG): container finished" podID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerID="3d148d4b00303cb4a46103de4c8495206800e46b6c580fd55481f597dd419a1c" exitCode=0 Feb 26 08:44:51 crc kubenswrapper[4741]: I0226 08:44:51.837521 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d1f040-be3f-4b5c-8094-f4dfefcb6124","Type":"ContainerDied","Data":"3d148d4b00303cb4a46103de4c8495206800e46b6c580fd55481f597dd419a1c"} Feb 26 08:44:53 crc kubenswrapper[4741]: I0226 08:44:53.824767 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:44:53 crc kubenswrapper[4741]: E0226 08:44:53.826202 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:44:56 crc kubenswrapper[4741]: I0226 08:44:56.261668 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="815578f6-90b1-4afc-91c7-d24a59a11b23" containerName="rabbitmq" containerID="cri-o://6b8e3fb0b791313279051abc8340c71d98ee58f7de4fb3e698b10504d659fde1" gracePeriod=604794 Feb 26 08:44:56 crc kubenswrapper[4741]: I0226 08:44:56.285415 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" containerName="rabbitmq" 
containerID="cri-o://c422c2b3fa19c77819312a9588b4b0471d4e3fdcbfb3ffd1e889f490d908d6df" gracePeriod=604795 Feb 26 08:44:56 crc kubenswrapper[4741]: I0226 08:44:56.932094 4741 generic.go:334] "Generic (PLEG): container finished" podID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerID="a2a0c2dafc7fd902fa130ef4c1de1dd8dd721a66e63a462f5589e74a199ef466" exitCode=0 Feb 26 08:44:56 crc kubenswrapper[4741]: I0226 08:44:56.932159 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d1f040-be3f-4b5c-8094-f4dfefcb6124","Type":"ContainerDied","Data":"a2a0c2dafc7fd902fa130ef4c1de1dd8dd721a66e63a462f5589e74a199ef466"} Feb 26 08:44:57 crc kubenswrapper[4741]: I0226 08:44:57.722735 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="815578f6-90b1-4afc-91c7-d24a59a11b23" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Feb 26 08:44:58 crc kubenswrapper[4741]: I0226 08:44:58.296592 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.138:5671: connect: connection refused" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.726154 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.831158 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d1f040-be3f-4b5c-8094-f4dfefcb6124-run-httpd\") pod \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.831407 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-sg-core-conf-yaml\") pod \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.831447 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-scripts\") pod \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.831487 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-ceilometer-tls-certs\") pod \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.831720 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-combined-ca-bundle\") pod \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.831769 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dj4j\" (UniqueName: 
\"kubernetes.io/projected/70d1f040-be3f-4b5c-8094-f4dfefcb6124-kube-api-access-8dj4j\") pod \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.831808 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d1f040-be3f-4b5c-8094-f4dfefcb6124-log-httpd\") pod \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.831836 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-config-data\") pod \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\" (UID: \"70d1f040-be3f-4b5c-8094-f4dfefcb6124\") " Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.833916 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70d1f040-be3f-4b5c-8094-f4dfefcb6124-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "70d1f040-be3f-4b5c-8094-f4dfefcb6124" (UID: "70d1f040-be3f-4b5c-8094-f4dfefcb6124"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.834326 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70d1f040-be3f-4b5c-8094-f4dfefcb6124-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "70d1f040-be3f-4b5c-8094-f4dfefcb6124" (UID: "70d1f040-be3f-4b5c-8094-f4dfefcb6124"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.835825 4741 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d1f040-be3f-4b5c-8094-f4dfefcb6124-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.835871 4741 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d1f040-be3f-4b5c-8094-f4dfefcb6124-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.839790 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d1f040-be3f-4b5c-8094-f4dfefcb6124-kube-api-access-8dj4j" (OuterVolumeSpecName: "kube-api-access-8dj4j") pod "70d1f040-be3f-4b5c-8094-f4dfefcb6124" (UID: "70d1f040-be3f-4b5c-8094-f4dfefcb6124"). InnerVolumeSpecName "kube-api-access-8dj4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.854171 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-scripts" (OuterVolumeSpecName: "scripts") pod "70d1f040-be3f-4b5c-8094-f4dfefcb6124" (UID: "70d1f040-be3f-4b5c-8094-f4dfefcb6124"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.873239 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "70d1f040-be3f-4b5c-8094-f4dfefcb6124" (UID: "70d1f040-be3f-4b5c-8094-f4dfefcb6124"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.908204 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "70d1f040-be3f-4b5c-8094-f4dfefcb6124" (UID: "70d1f040-be3f-4b5c-8094-f4dfefcb6124"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.934703 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70d1f040-be3f-4b5c-8094-f4dfefcb6124" (UID: "70d1f040-be3f-4b5c-8094-f4dfefcb6124"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.939409 4741 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.939440 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.939452 4741 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.939461 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.939473 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dj4j\" (UniqueName: \"kubernetes.io/projected/70d1f040-be3f-4b5c-8094-f4dfefcb6124-kube-api-access-8dj4j\") on node \"crc\" DevicePath \"\"" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.968625 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-config-data" (OuterVolumeSpecName: "config-data") pod "70d1f040-be3f-4b5c-8094-f4dfefcb6124" (UID: "70d1f040-be3f-4b5c-8094-f4dfefcb6124"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.979083 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d1f040-be3f-4b5c-8094-f4dfefcb6124","Type":"ContainerDied","Data":"9cf59402e3182d7fef17dd53fe3af7e595653e911e91edc5d067431eb599478c"} Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.979245 4741 scope.go:117] "RemoveContainer" containerID="d47fbf53ac3ae197319c30e13d124c0d724b17842773c321b170f5b200266d87" Feb 26 08:44:59 crc kubenswrapper[4741]: I0226 08:44:59.979674 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.054020 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.054274 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d1f040-be3f-4b5c-8094-f4dfefcb6124-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.071758 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.088600 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:45:00 crc kubenswrapper[4741]: E0226 08:45:00.089431 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="sg-core" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.089451 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="sg-core" Feb 26 08:45:00 crc kubenswrapper[4741]: E0226 08:45:00.089499 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="ceilometer-central-agent" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.089507 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="ceilometer-central-agent" Feb 26 08:45:00 crc kubenswrapper[4741]: E0226 08:45:00.089515 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="ceilometer-notification-agent" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.089521 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="ceilometer-notification-agent" Feb 26 08:45:00 crc 
kubenswrapper[4741]: E0226 08:45:00.089555 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="proxy-httpd" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.089563 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="proxy-httpd" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.089868 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="ceilometer-central-agent" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.089888 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="sg-core" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.089916 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="proxy-httpd" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.089927 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" containerName="ceilometer-notification-agent" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.093532 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.096396 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.096650 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.096797 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.103653 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.169426 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp"] Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.171576 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.174199 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.174432 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.182406 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp"] Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.264433 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw5sb\" (UniqueName: \"kubernetes.io/projected/ad35d04e-1800-463f-8059-29fac13e2947-kube-api-access-tw5sb\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.264611 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.264770 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.264830 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-scripts\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.264902 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad35d04e-1800-463f-8059-29fac13e2947-run-httpd\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.265048 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-config-data\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.265185 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad35d04e-1800-463f-8059-29fac13e2947-log-httpd\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.265281 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.369317 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad35d04e-1800-463f-8059-29fac13e2947-run-httpd\") pod \"ceilometer-0\" (UID: 
\"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.369393 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-config-data\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.369428 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad35d04e-1800-463f-8059-29fac13e2947-log-httpd\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.369478 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2khm\" (UniqueName: \"kubernetes.io/projected/36f06202-eab6-4057-a11f-1e003e1f60bc-kube-api-access-p2khm\") pod \"collect-profiles-29534925-5wwfp\" (UID: \"36f06202-eab6-4057-a11f-1e003e1f60bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.369514 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.369624 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36f06202-eab6-4057-a11f-1e003e1f60bc-config-volume\") pod \"collect-profiles-29534925-5wwfp\" (UID: \"36f06202-eab6-4057-a11f-1e003e1f60bc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.369663 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw5sb\" (UniqueName: \"kubernetes.io/projected/ad35d04e-1800-463f-8059-29fac13e2947-kube-api-access-tw5sb\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.369730 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.369801 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36f06202-eab6-4057-a11f-1e003e1f60bc-secret-volume\") pod \"collect-profiles-29534925-5wwfp\" (UID: \"36f06202-eab6-4057-a11f-1e003e1f60bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.369833 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.369866 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-scripts\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 
08:45:00.369919 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad35d04e-1800-463f-8059-29fac13e2947-run-httpd\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.370295 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad35d04e-1800-463f-8059-29fac13e2947-log-httpd\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.373875 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.374957 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-config-data\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.376118 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-scripts\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.377089 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " 
pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.387124 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad35d04e-1800-463f-8059-29fac13e2947-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.388947 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw5sb\" (UniqueName: \"kubernetes.io/projected/ad35d04e-1800-463f-8059-29fac13e2947-kube-api-access-tw5sb\") pod \"ceilometer-0\" (UID: \"ad35d04e-1800-463f-8059-29fac13e2947\") " pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.443399 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.476469 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36f06202-eab6-4057-a11f-1e003e1f60bc-config-volume\") pod \"collect-profiles-29534925-5wwfp\" (UID: \"36f06202-eab6-4057-a11f-1e003e1f60bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.476600 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36f06202-eab6-4057-a11f-1e003e1f60bc-secret-volume\") pod \"collect-profiles-29534925-5wwfp\" (UID: \"36f06202-eab6-4057-a11f-1e003e1f60bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.476732 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2khm\" (UniqueName: 
\"kubernetes.io/projected/36f06202-eab6-4057-a11f-1e003e1f60bc-kube-api-access-p2khm\") pod \"collect-profiles-29534925-5wwfp\" (UID: \"36f06202-eab6-4057-a11f-1e003e1f60bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.478089 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36f06202-eab6-4057-a11f-1e003e1f60bc-config-volume\") pod \"collect-profiles-29534925-5wwfp\" (UID: \"36f06202-eab6-4057-a11f-1e003e1f60bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.481036 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36f06202-eab6-4057-a11f-1e003e1f60bc-secret-volume\") pod \"collect-profiles-29534925-5wwfp\" (UID: \"36f06202-eab6-4057-a11f-1e003e1f60bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.504240 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2khm\" (UniqueName: \"kubernetes.io/projected/36f06202-eab6-4057-a11f-1e003e1f60bc-kube-api-access-p2khm\") pod \"collect-profiles-29534925-5wwfp\" (UID: \"36f06202-eab6-4057-a11f-1e003e1f60bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" Feb 26 08:45:00 crc kubenswrapper[4741]: I0226 08:45:00.795640 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" Feb 26 08:45:01 crc kubenswrapper[4741]: I0226 08:45:01.812712 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d1f040-be3f-4b5c-8094-f4dfefcb6124" path="/var/lib/kubelet/pods/70d1f040-be3f-4b5c-8094-f4dfefcb6124/volumes" Feb 26 08:45:03 crc kubenswrapper[4741]: I0226 08:45:03.027796 4741 generic.go:334] "Generic (PLEG): container finished" podID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" containerID="c422c2b3fa19c77819312a9588b4b0471d4e3fdcbfb3ffd1e889f490d908d6df" exitCode=0 Feb 26 08:45:03 crc kubenswrapper[4741]: I0226 08:45:03.028024 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"403c217b-d3d9-47a3-8a5a-4f6e917edcad","Type":"ContainerDied","Data":"c422c2b3fa19c77819312a9588b4b0471d4e3fdcbfb3ffd1e889f490d908d6df"} Feb 26 08:45:03 crc kubenswrapper[4741]: I0226 08:45:03.032391 4741 generic.go:334] "Generic (PLEG): container finished" podID="815578f6-90b1-4afc-91c7-d24a59a11b23" containerID="6b8e3fb0b791313279051abc8340c71d98ee58f7de4fb3e698b10504d659fde1" exitCode=0 Feb 26 08:45:03 crc kubenswrapper[4741]: I0226 08:45:03.032442 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"815578f6-90b1-4afc-91c7-d24a59a11b23","Type":"ContainerDied","Data":"6b8e3fb0b791313279051abc8340c71d98ee58f7de4fb3e698b10504d659fde1"} Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.515244 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68df85789f-lbgdz"] Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.519812 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.528607 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.544911 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-lbgdz"] Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.647377 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-config\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.647739 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzgzc\" (UniqueName: \"kubernetes.io/projected/368e20b6-a506-49ea-b015-0f186a5ab756-kube-api-access-wzgzc\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.647782 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.647866 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-dns-svc\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " 
pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.648036 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.648090 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.648163 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.727721 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="815578f6-90b1-4afc-91c7-d24a59a11b23" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.751010 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-dns-svc\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc 
kubenswrapper[4741]: I0226 08:45:07.751125 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.751202 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.751247 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.751347 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-config\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.751414 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzgzc\" (UniqueName: \"kubernetes.io/projected/368e20b6-a506-49ea-b015-0f186a5ab756-kube-api-access-wzgzc\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.751436 
4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.752145 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.752319 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.752476 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.753319 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.754773 4741 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-dns-svc\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.755013 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-config\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.789708 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:45:07 crc kubenswrapper[4741]: E0226 08:45:07.790385 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.791897 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzgzc\" (UniqueName: \"kubernetes.io/projected/368e20b6-a506-49ea-b015-0f186a5ab756-kube-api-access-wzgzc\") pod \"dnsmasq-dns-68df85789f-lbgdz\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:07 crc kubenswrapper[4741]: I0226 08:45:07.844684 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:08 crc kubenswrapper[4741]: I0226 08:45:08.297036 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.138:5671: connect: connection refused" Feb 26 08:45:09 crc kubenswrapper[4741]: E0226 08:45:09.715383 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 26 08:45:09 crc kubenswrapper[4741]: E0226 08:45:09.715637 4741 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 26 08:45:09 crc kubenswrapper[4741]: E0226 08:45:09.715910 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4x7j9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-hng7c_openstack(ee9aa4f4-8a5b-4b08-9708-a83062f9bec3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 
26 08:45:09 crc kubenswrapper[4741]: E0226 08:45:09.717138 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-hng7c" podUID="ee9aa4f4-8a5b-4b08-9708-a83062f9bec3" Feb 26 08:45:09 crc kubenswrapper[4741]: I0226 08:45:09.743630 4741 scope.go:117] "RemoveContainer" containerID="92da333e762c89114f59e568592075c1ecdf64a6fbe590b1f0b4d48018c7e962" Feb 26 08:45:09 crc kubenswrapper[4741]: I0226 08:45:09.958325 4741 scope.go:117] "RemoveContainer" containerID="a2a0c2dafc7fd902fa130ef4c1de1dd8dd721a66e63a462f5589e74a199ef466" Feb 26 08:45:09 crc kubenswrapper[4741]: I0226 08:45:09.988801 4741 scope.go:117] "RemoveContainer" containerID="3d148d4b00303cb4a46103de4c8495206800e46b6c580fd55481f597dd419a1c" Feb 26 08:45:10 crc kubenswrapper[4741]: E0226 08:45:10.151961 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-hng7c" podUID="ee9aa4f4-8a5b-4b08-9708-a83062f9bec3" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.399548 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.406040 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.559363 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-erlang-cookie\") pod \"815578f6-90b1-4afc-91c7-d24a59a11b23\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.561798 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-079afb70-964f-4a28-ba21-b9d42945983d\") pod \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.561930 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-server-conf\") pod \"815578f6-90b1-4afc-91c7-d24a59a11b23\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.561979 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-plugins\") pod \"815578f6-90b1-4afc-91c7-d24a59a11b23\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.562041 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/403c217b-d3d9-47a3-8a5a-4f6e917edcad-pod-info\") pod \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.562084 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-confd\") pod \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.562114 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-config-data\") pod \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.562171 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/403c217b-d3d9-47a3-8a5a-4f6e917edcad-erlang-cookie-secret\") pod \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.562213 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-erlang-cookie\") pod \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.563101 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\") pod \"815578f6-90b1-4afc-91c7-d24a59a11b23\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.563195 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/815578f6-90b1-4afc-91c7-d24a59a11b23-erlang-cookie-secret\") pod \"815578f6-90b1-4afc-91c7-d24a59a11b23\" (UID: 
\"815578f6-90b1-4afc-91c7-d24a59a11b23\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.563361 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-tls\") pod \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.563420 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/815578f6-90b1-4afc-91c7-d24a59a11b23-pod-info\") pod \"815578f6-90b1-4afc-91c7-d24a59a11b23\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.563454 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-confd\") pod \"815578f6-90b1-4afc-91c7-d24a59a11b23\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.563478 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-server-conf\") pod \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.563593 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-plugins\") pod \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.563630 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rjcr\" (UniqueName: 
\"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-kube-api-access-8rjcr\") pod \"815578f6-90b1-4afc-91c7-d24a59a11b23\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.563729 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6csx5\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-kube-api-access-6csx5\") pod \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.563753 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-config-data\") pod \"815578f6-90b1-4afc-91c7-d24a59a11b23\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.563809 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-plugins-conf\") pod \"815578f6-90b1-4afc-91c7-d24a59a11b23\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.563852 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-tls\") pod \"815578f6-90b1-4afc-91c7-d24a59a11b23\" (UID: \"815578f6-90b1-4afc-91c7-d24a59a11b23\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.563894 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-plugins-conf\") pod \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\" (UID: \"403c217b-d3d9-47a3-8a5a-4f6e917edcad\") " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 
08:45:10.566131 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "815578f6-90b1-4afc-91c7-d24a59a11b23" (UID: "815578f6-90b1-4afc-91c7-d24a59a11b23"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.566661 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "403c217b-d3d9-47a3-8a5a-4f6e917edcad" (UID: "403c217b-d3d9-47a3-8a5a-4f6e917edcad"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.567435 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "815578f6-90b1-4afc-91c7-d24a59a11b23" (UID: "815578f6-90b1-4afc-91c7-d24a59a11b23"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.582871 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/403c217b-d3d9-47a3-8a5a-4f6e917edcad-pod-info" (OuterVolumeSpecName: "pod-info") pod "403c217b-d3d9-47a3-8a5a-4f6e917edcad" (UID: "403c217b-d3d9-47a3-8a5a-4f6e917edcad"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.589638 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.599896 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "403c217b-d3d9-47a3-8a5a-4f6e917edcad" (UID: "403c217b-d3d9-47a3-8a5a-4f6e917edcad"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.606290 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "815578f6-90b1-4afc-91c7-d24a59a11b23" (UID: "815578f6-90b1-4afc-91c7-d24a59a11b23"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.612619 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "403c217b-d3d9-47a3-8a5a-4f6e917edcad" (UID: "403c217b-d3d9-47a3-8a5a-4f6e917edcad"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.618486 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/815578f6-90b1-4afc-91c7-d24a59a11b23-pod-info" (OuterVolumeSpecName: "pod-info") pod "815578f6-90b1-4afc-91c7-d24a59a11b23" (UID: "815578f6-90b1-4afc-91c7-d24a59a11b23"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.618531 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-kube-api-access-6csx5" (OuterVolumeSpecName: "kube-api-access-6csx5") pod "403c217b-d3d9-47a3-8a5a-4f6e917edcad" (UID: "403c217b-d3d9-47a3-8a5a-4f6e917edcad"). InnerVolumeSpecName "kube-api-access-6csx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.618700 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "403c217b-d3d9-47a3-8a5a-4f6e917edcad" (UID: "403c217b-d3d9-47a3-8a5a-4f6e917edcad"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.631232 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "815578f6-90b1-4afc-91c7-d24a59a11b23" (UID: "815578f6-90b1-4afc-91c7-d24a59a11b23"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.631551 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-kube-api-access-8rjcr" (OuterVolumeSpecName: "kube-api-access-8rjcr") pod "815578f6-90b1-4afc-91c7-d24a59a11b23" (UID: "815578f6-90b1-4afc-91c7-d24a59a11b23"). InnerVolumeSpecName "kube-api-access-8rjcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.634189 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/403c217b-d3d9-47a3-8a5a-4f6e917edcad-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "403c217b-d3d9-47a3-8a5a-4f6e917edcad" (UID: "403c217b-d3d9-47a3-8a5a-4f6e917edcad"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.643698 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815578f6-90b1-4afc-91c7-d24a59a11b23-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "815578f6-90b1-4afc-91c7-d24a59a11b23" (UID: "815578f6-90b1-4afc-91c7-d24a59a11b23"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.657390 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-config-data" (OuterVolumeSpecName: "config-data") pod "403c217b-d3d9-47a3-8a5a-4f6e917edcad" (UID: "403c217b-d3d9-47a3-8a5a-4f6e917edcad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674253 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674306 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rjcr\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-kube-api-access-8rjcr\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674326 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6csx5\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-kube-api-access-6csx5\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674341 4741 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674356 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674369 4741 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674382 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc 
kubenswrapper[4741]: I0226 08:45:10.674395 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674408 4741 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/403c217b-d3d9-47a3-8a5a-4f6e917edcad-pod-info\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674425 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674437 4741 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/403c217b-d3d9-47a3-8a5a-4f6e917edcad-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674449 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674463 4741 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/815578f6-90b1-4afc-91c7-d24a59a11b23-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674476 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.674488 4741 reconciler_common.go:293] "Volume detached for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/815578f6-90b1-4afc-91c7-d24a59a11b23-pod-info\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.688148 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-config-data" (OuterVolumeSpecName: "config-data") pod "815578f6-90b1-4afc-91c7-d24a59a11b23" (UID: "815578f6-90b1-4afc-91c7-d24a59a11b23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.712322 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp"] Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.722441 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-079afb70-964f-4a28-ba21-b9d42945983d" (OuterVolumeSpecName: "persistence") pod "403c217b-d3d9-47a3-8a5a-4f6e917edcad" (UID: "403c217b-d3d9-47a3-8a5a-4f6e917edcad"). InnerVolumeSpecName "pvc-079afb70-964f-4a28-ba21-b9d42945983d". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.727263 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905" (OuterVolumeSpecName: "persistence") pod "815578f6-90b1-4afc-91c7-d24a59a11b23" (UID: "815578f6-90b1-4afc-91c7-d24a59a11b23"). InnerVolumeSpecName "pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.730547 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-lbgdz"] Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.748248 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-server-conf" (OuterVolumeSpecName: "server-conf") pod "403c217b-d3d9-47a3-8a5a-4f6e917edcad" (UID: "403c217b-d3d9-47a3-8a5a-4f6e917edcad"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.759867 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-server-conf" (OuterVolumeSpecName: "server-conf") pod "815578f6-90b1-4afc-91c7-d24a59a11b23" (UID: "815578f6-90b1-4afc-91c7-d24a59a11b23"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.780951 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.781755 4741 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-079afb70-964f-4a28-ba21-b9d42945983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-079afb70-964f-4a28-ba21-b9d42945983d\") on node \"crc\" " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.781781 4741 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/815578f6-90b1-4afc-91c7-d24a59a11b23-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.781802 4741 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\") on node \"crc\" " Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.781813 4741 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/403c217b-d3d9-47a3-8a5a-4f6e917edcad-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.849785 4741 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.850252 4741 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-079afb70-964f-4a28-ba21-b9d42945983d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-079afb70-964f-4a28-ba21-b9d42945983d") on node "crc" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.881646 4741 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.882072 4741 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905") on node "crc" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.885105 4741 reconciler_common.go:293] "Volume detached for volume \"pvc-079afb70-964f-4a28-ba21-b9d42945983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-079afb70-964f-4a28-ba21-b9d42945983d\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.885150 4741 reconciler_common.go:293] "Volume detached for volume \"pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.936158 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "815578f6-90b1-4afc-91c7-d24a59a11b23" (UID: "815578f6-90b1-4afc-91c7-d24a59a11b23"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.952991 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "403c217b-d3d9-47a3-8a5a-4f6e917edcad" (UID: "403c217b-d3d9-47a3-8a5a-4f6e917edcad"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.987876 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/403c217b-d3d9-47a3-8a5a-4f6e917edcad-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:10 crc kubenswrapper[4741]: I0226 08:45:10.988052 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/815578f6-90b1-4afc-91c7-d24a59a11b23-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.177092 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.177080 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"403c217b-d3d9-47a3-8a5a-4f6e917edcad","Type":"ContainerDied","Data":"d4d6a418c49651b93af53b4cbaa074ed84577723e3444a12554ac13e7ee9e00b"} Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.177280 4741 scope.go:117] "RemoveContainer" containerID="c422c2b3fa19c77819312a9588b4b0471d4e3fdcbfb3ffd1e889f490d908d6df" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.179845 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad35d04e-1800-463f-8059-29fac13e2947","Type":"ContainerStarted","Data":"da8f32edea9ecf30684d06e682a531a48700581373f2741b12ca9fcda4c7e990"} Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.182420 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" event={"ID":"36f06202-eab6-4057-a11f-1e003e1f60bc","Type":"ContainerStarted","Data":"b6e0794b11a83ae5a643b66da87b03066a19a707f952ae55dae85cf927e6a074"} Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.187903 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.187887 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"815578f6-90b1-4afc-91c7-d24a59a11b23","Type":"ContainerDied","Data":"03a6d04ad5cacbab95bae5a1cbbd16d5b6029bff4801768920ac7e093d6daa32"} Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.189692 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-lbgdz" event={"ID":"368e20b6-a506-49ea-b015-0f186a5ab756","Type":"ContainerStarted","Data":"2397341e489e95ac2ad066044d416f0d28e1695b0b531b87e5a86be4690bf576"} Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.250240 4741 scope.go:117] "RemoveContainer" containerID="40077b640f4292247aae5f8f0827b3ba522775716ed561f8bb53c5d384790948" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.256666 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.293252 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.311803 4741 scope.go:117] "RemoveContainer" containerID="6b8e3fb0b791313279051abc8340c71d98ee58f7de4fb3e698b10504d659fde1" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.348171 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.372647 4741 scope.go:117] "RemoveContainer" containerID="6dd5410e6ea19da91c248be084d0673d3300b2ab87b7a41e69fd4beec4aa2e91" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.374696 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.396496 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 
08:45:11 crc kubenswrapper[4741]: E0226 08:45:11.398347 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815578f6-90b1-4afc-91c7-d24a59a11b23" containerName="rabbitmq" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.398383 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="815578f6-90b1-4afc-91c7-d24a59a11b23" containerName="rabbitmq" Feb 26 08:45:11 crc kubenswrapper[4741]: E0226 08:45:11.398405 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" containerName="setup-container" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.398417 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" containerName="setup-container" Feb 26 08:45:11 crc kubenswrapper[4741]: E0226 08:45:11.398437 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" containerName="rabbitmq" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.398449 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" containerName="rabbitmq" Feb 26 08:45:11 crc kubenswrapper[4741]: E0226 08:45:11.398488 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815578f6-90b1-4afc-91c7-d24a59a11b23" containerName="setup-container" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.398497 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="815578f6-90b1-4afc-91c7-d24a59a11b23" containerName="setup-container" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.398973 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" containerName="rabbitmq" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.399011 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="815578f6-90b1-4afc-91c7-d24a59a11b23" containerName="rabbitmq" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 
08:45:11.403504 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.419926 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.421963 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.424393 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.435997 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.436219 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.436367 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.436499 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.440861 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2cwkd" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.444529 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.517460 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.527547 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.542002 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-079afb70-964f-4a28-ba21-b9d42945983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-079afb70-964f-4a28-ba21-b9d42945983d\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.542094 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/acd31381-59b4-426e-94f1-57ac13548b26-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.543029 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acd31381-59b4-426e-94f1-57ac13548b26-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.543352 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/acd31381-59b4-426e-94f1-57ac13548b26-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.543391 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/acd31381-59b4-426e-94f1-57ac13548b26-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.543483 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/acd31381-59b4-426e-94f1-57ac13548b26-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.543556 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/acd31381-59b4-426e-94f1-57ac13548b26-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.543596 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/acd31381-59b4-426e-94f1-57ac13548b26-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.543690 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/acd31381-59b4-426e-94f1-57ac13548b26-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.543722 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-pjdq6\" (UniqueName: \"kubernetes.io/projected/acd31381-59b4-426e-94f1-57ac13548b26-kube-api-access-pjdq6\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.543967 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/acd31381-59b4-426e-94f1-57ac13548b26-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.647659 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.653444 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/acd31381-59b4-426e-94f1-57ac13548b26-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.653966 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/acd31381-59b4-426e-94f1-57ac13548b26-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.656175 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/acd31381-59b4-426e-94f1-57ac13548b26-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.656356 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.656556 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-config-data\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.656719 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/acd31381-59b4-426e-94f1-57ac13548b26-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.656857 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjdq6\" (UniqueName: \"kubernetes.io/projected/acd31381-59b4-426e-94f1-57ac13548b26-kube-api-access-pjdq6\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.657132 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.657258 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.657359 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.657653 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/acd31381-59b4-426e-94f1-57ac13548b26-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.658897 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.657682 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/acd31381-59b4-426e-94f1-57ac13548b26-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.659504 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/acd31381-59b4-426e-94f1-57ac13548b26-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.661832 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5fz7\" (UniqueName: \"kubernetes.io/projected/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-kube-api-access-r5fz7\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.661858 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/acd31381-59b4-426e-94f1-57ac13548b26-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.662414 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/acd31381-59b4-426e-94f1-57ac13548b26-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.663031 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/acd31381-59b4-426e-94f1-57ac13548b26-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 
crc kubenswrapper[4741]: I0226 08:45:11.663294 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-079afb70-964f-4a28-ba21-b9d42945983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-079afb70-964f-4a28-ba21-b9d42945983d\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.663525 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/acd31381-59b4-426e-94f1-57ac13548b26-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.663637 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acd31381-59b4-426e-94f1-57ac13548b26-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.663865 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.663910 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/acd31381-59b4-426e-94f1-57ac13548b26-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 
08:45:11.664027 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.664140 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.664340 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/acd31381-59b4-426e-94f1-57ac13548b26-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.664401 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/acd31381-59b4-426e-94f1-57ac13548b26-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.665139 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acd31381-59b4-426e-94f1-57ac13548b26-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.665543 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/acd31381-59b4-426e-94f1-57ac13548b26-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.680263 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/acd31381-59b4-426e-94f1-57ac13548b26-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.681186 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.681249 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-079afb70-964f-4a28-ba21-b9d42945983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-079afb70-964f-4a28-ba21-b9d42945983d\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4a48402aa3ae82a7c0531ca5bb6b953c670f31e961f44e8dfbb6dc451a9362d/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.684004 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjdq6\" (UniqueName: \"kubernetes.io/projected/acd31381-59b4-426e-94f1-57ac13548b26-kube-api-access-pjdq6\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.769210 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.769674 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.769781 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.769885 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.769924 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5fz7\" (UniqueName: \"kubernetes.io/projected/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-kube-api-access-r5fz7\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.769941 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: 
\"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.770061 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.770109 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.770198 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.770298 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.770400 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 
08:45:11.770430 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-config-data\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.771350 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-config-data\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.772413 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.773270 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.774738 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.774779 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9b25ae51cf46b80f139cc98e7ff2e70fbe8dd51bf375cc8bf062bffee99caeec/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.775179 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.778155 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.778660 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.779375 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " 
pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.780552 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.797354 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-079afb70-964f-4a28-ba21-b9d42945983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-079afb70-964f-4a28-ba21-b9d42945983d\") pod \"rabbitmq-cell1-server-0\" (UID: \"acd31381-59b4-426e-94f1-57ac13548b26\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.812648 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="403c217b-d3d9-47a3-8a5a-4f6e917edcad" path="/var/lib/kubelet/pods/403c217b-d3d9-47a3-8a5a-4f6e917edcad/volumes" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.814607 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815578f6-90b1-4afc-91c7-d24a59a11b23" path="/var/lib/kubelet/pods/815578f6-90b1-4afc-91c7-d24a59a11b23/volumes" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.837251 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5fz7\" (UniqueName: \"kubernetes.io/projected/9f58f56d-176d-4468-ae5a-31e1e7fb48a1-kube-api-access-r5fz7\") pod \"rabbitmq-server-2\" (UID: \"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:11 crc kubenswrapper[4741]: I0226 08:45:11.909396 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c60e5f-c0c5-4ad2-8649-11a921543905\") pod \"rabbitmq-server-2\" (UID: 
\"9f58f56d-176d-4468-ae5a-31e1e7fb48a1\") " pod="openstack/rabbitmq-server-2" Feb 26 08:45:12 crc kubenswrapper[4741]: I0226 08:45:12.060610 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:12 crc kubenswrapper[4741]: E0226 08:45:12.062272 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36f06202_eab6_4057_a11f_1e003e1f60bc.slice/crio-conmon-c4aed9d81bba654b861b4082f35d098fca9d4e0506889bd476eea050316c3fce.scope\": RecentStats: unable to find data in memory cache]" Feb 26 08:45:12 crc kubenswrapper[4741]: I0226 08:45:12.191698 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 26 08:45:12 crc kubenswrapper[4741]: I0226 08:45:12.223569 4741 generic.go:334] "Generic (PLEG): container finished" podID="36f06202-eab6-4057-a11f-1e003e1f60bc" containerID="c4aed9d81bba654b861b4082f35d098fca9d4e0506889bd476eea050316c3fce" exitCode=0 Feb 26 08:45:12 crc kubenswrapper[4741]: I0226 08:45:12.223635 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" event={"ID":"36f06202-eab6-4057-a11f-1e003e1f60bc","Type":"ContainerDied","Data":"c4aed9d81bba654b861b4082f35d098fca9d4e0506889bd476eea050316c3fce"} Feb 26 08:45:12 crc kubenswrapper[4741]: I0226 08:45:12.250617 4741 generic.go:334] "Generic (PLEG): container finished" podID="368e20b6-a506-49ea-b015-0f186a5ab756" containerID="b686d7ee278690234d715a6a5bcccf899182c0c5a8fbf6a4b558ed37816690f9" exitCode=0 Feb 26 08:45:12 crc kubenswrapper[4741]: I0226 08:45:12.250695 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-lbgdz" 
event={"ID":"368e20b6-a506-49ea-b015-0f186a5ab756","Type":"ContainerDied","Data":"b686d7ee278690234d715a6a5bcccf899182c0c5a8fbf6a4b558ed37816690f9"} Feb 26 08:45:12 crc kubenswrapper[4741]: I0226 08:45:12.658840 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 08:45:12 crc kubenswrapper[4741]: I0226 08:45:12.880379 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 26 08:45:13 crc kubenswrapper[4741]: I0226 08:45:13.271823 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-lbgdz" event={"ID":"368e20b6-a506-49ea-b015-0f186a5ab756","Type":"ContainerStarted","Data":"b09cc6d2b171a23a66f4168f0330c5a3d5344a70eeb44c1e3aacf5170dde0fc1"} Feb 26 08:45:13 crc kubenswrapper[4741]: I0226 08:45:13.298455 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68df85789f-lbgdz" podStartSLOduration=6.298423946 podStartE2EDuration="6.298423946s" podCreationTimestamp="2026-02-26 08:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:45:13.293517007 +0000 UTC m=+1948.289454394" watchObservedRunningTime="2026-02-26 08:45:13.298423946 +0000 UTC m=+1948.294361333" Feb 26 08:45:14 crc kubenswrapper[4741]: I0226 08:45:14.291032 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:14 crc kubenswrapper[4741]: W0226 08:45:14.309191 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacd31381_59b4_426e_94f1_57ac13548b26.slice/crio-fa8155db7c862793be30b569c2c52bd54b49de5d79a1888e81e2b17622959413 WatchSource:0}: Error finding container fa8155db7c862793be30b569c2c52bd54b49de5d79a1888e81e2b17622959413: Status 404 returned error can't find the container with id 
fa8155db7c862793be30b569c2c52bd54b49de5d79a1888e81e2b17622959413 Feb 26 08:45:14 crc kubenswrapper[4741]: I0226 08:45:14.477734 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" Feb 26 08:45:14 crc kubenswrapper[4741]: I0226 08:45:14.579986 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36f06202-eab6-4057-a11f-1e003e1f60bc-config-volume\") pod \"36f06202-eab6-4057-a11f-1e003e1f60bc\" (UID: \"36f06202-eab6-4057-a11f-1e003e1f60bc\") " Feb 26 08:45:14 crc kubenswrapper[4741]: I0226 08:45:14.580301 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2khm\" (UniqueName: \"kubernetes.io/projected/36f06202-eab6-4057-a11f-1e003e1f60bc-kube-api-access-p2khm\") pod \"36f06202-eab6-4057-a11f-1e003e1f60bc\" (UID: \"36f06202-eab6-4057-a11f-1e003e1f60bc\") " Feb 26 08:45:14 crc kubenswrapper[4741]: I0226 08:45:14.580613 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36f06202-eab6-4057-a11f-1e003e1f60bc-secret-volume\") pod \"36f06202-eab6-4057-a11f-1e003e1f60bc\" (UID: \"36f06202-eab6-4057-a11f-1e003e1f60bc\") " Feb 26 08:45:14 crc kubenswrapper[4741]: I0226 08:45:14.580895 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36f06202-eab6-4057-a11f-1e003e1f60bc-config-volume" (OuterVolumeSpecName: "config-volume") pod "36f06202-eab6-4057-a11f-1e003e1f60bc" (UID: "36f06202-eab6-4057-a11f-1e003e1f60bc"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:14 crc kubenswrapper[4741]: I0226 08:45:14.582023 4741 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36f06202-eab6-4057-a11f-1e003e1f60bc-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:14 crc kubenswrapper[4741]: I0226 08:45:14.587100 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36f06202-eab6-4057-a11f-1e003e1f60bc-kube-api-access-p2khm" (OuterVolumeSpecName: "kube-api-access-p2khm") pod "36f06202-eab6-4057-a11f-1e003e1f60bc" (UID: "36f06202-eab6-4057-a11f-1e003e1f60bc"). InnerVolumeSpecName "kube-api-access-p2khm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:45:14 crc kubenswrapper[4741]: I0226 08:45:14.587947 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f06202-eab6-4057-a11f-1e003e1f60bc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "36f06202-eab6-4057-a11f-1e003e1f60bc" (UID: "36f06202-eab6-4057-a11f-1e003e1f60bc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:14 crc kubenswrapper[4741]: I0226 08:45:14.685047 4741 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36f06202-eab6-4057-a11f-1e003e1f60bc-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:14 crc kubenswrapper[4741]: I0226 08:45:14.685088 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2khm\" (UniqueName: \"kubernetes.io/projected/36f06202-eab6-4057-a11f-1e003e1f60bc-kube-api-access-p2khm\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:14 crc kubenswrapper[4741]: W0226 08:45:14.843318 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f58f56d_176d_4468_ae5a_31e1e7fb48a1.slice/crio-db17e22cfa7cde33447edcf38cde8858c913857c2ce7f0f2d7f2c010a17328da WatchSource:0}: Error finding container db17e22cfa7cde33447edcf38cde8858c913857c2ce7f0f2d7f2c010a17328da: Status 404 returned error can't find the container with id db17e22cfa7cde33447edcf38cde8858c913857c2ce7f0f2d7f2c010a17328da Feb 26 08:45:15 crc kubenswrapper[4741]: I0226 08:45:15.304986 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9f58f56d-176d-4468-ae5a-31e1e7fb48a1","Type":"ContainerStarted","Data":"db17e22cfa7cde33447edcf38cde8858c913857c2ce7f0f2d7f2c010a17328da"} Feb 26 08:45:15 crc kubenswrapper[4741]: I0226 08:45:15.307717 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad35d04e-1800-463f-8059-29fac13e2947","Type":"ContainerStarted","Data":"76b854d55d6853c7e3a5cd4f44c010876da4ba6844183ab7bb0cba44784eec43"} Feb 26 08:45:15 crc kubenswrapper[4741]: I0226 08:45:15.309899 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" 
event={"ID":"36f06202-eab6-4057-a11f-1e003e1f60bc","Type":"ContainerDied","Data":"b6e0794b11a83ae5a643b66da87b03066a19a707f952ae55dae85cf927e6a074"} Feb 26 08:45:15 crc kubenswrapper[4741]: I0226 08:45:15.309930 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6e0794b11a83ae5a643b66da87b03066a19a707f952ae55dae85cf927e6a074" Feb 26 08:45:15 crc kubenswrapper[4741]: I0226 08:45:15.309940 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp" Feb 26 08:45:15 crc kubenswrapper[4741]: I0226 08:45:15.311583 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"acd31381-59b4-426e-94f1-57ac13548b26","Type":"ContainerStarted","Data":"fa8155db7c862793be30b569c2c52bd54b49de5d79a1888e81e2b17622959413"} Feb 26 08:45:17 crc kubenswrapper[4741]: I0226 08:45:17.339857 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad35d04e-1800-463f-8059-29fac13e2947","Type":"ContainerStarted","Data":"f9d4dfeb12a748d44702efac0fa08174ea6b75c1f030bad1d2a50a41d6917642"} Feb 26 08:45:17 crc kubenswrapper[4741]: I0226 08:45:17.342727 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"acd31381-59b4-426e-94f1-57ac13548b26","Type":"ContainerStarted","Data":"8daf402627e18a8adf28279a0d51b893d071c8c53a63ce5909d8458838a167c9"} Feb 26 08:45:17 crc kubenswrapper[4741]: I0226 08:45:17.846346 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:17 crc kubenswrapper[4741]: I0226 08:45:17.929529 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-z7vs2"] Feb 26 08:45:17 crc kubenswrapper[4741]: I0226 08:45:17.929897 4741 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" podUID="bd5ef4ab-5d3a-41b5-965e-7579768d32b8" containerName="dnsmasq-dns" containerID="cri-o://33af1e4eec86045af6d96cf3b23e28ff3d21b6e5ec6442c5028cd9c57dd3f076" gracePeriod=10 Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.191567 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-p5dww"] Feb 26 08:45:18 crc kubenswrapper[4741]: E0226 08:45:18.192571 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f06202-eab6-4057-a11f-1e003e1f60bc" containerName="collect-profiles" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.192600 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f06202-eab6-4057-a11f-1e003e1f60bc" containerName="collect-profiles" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.192985 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f06202-eab6-4057-a11f-1e003e1f60bc" containerName="collect-profiles" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.195170 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.241424 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-p5dww"] Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.315219 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.315314 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-config\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.315370 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lff2l\" (UniqueName: \"kubernetes.io/projected/6cf8018e-f0d4-483a-8778-c94aafa4971d-kube-api-access-lff2l\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.315434 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-dns-svc\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.315466 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.315533 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.315574 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.365261 4741 generic.go:334] "Generic (PLEG): container finished" podID="bd5ef4ab-5d3a-41b5-965e-7579768d32b8" containerID="33af1e4eec86045af6d96cf3b23e28ff3d21b6e5ec6442c5028cd9c57dd3f076" exitCode=0 Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.365341 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" event={"ID":"bd5ef4ab-5d3a-41b5-965e-7579768d32b8","Type":"ContainerDied","Data":"33af1e4eec86045af6d96cf3b23e28ff3d21b6e5ec6442c5028cd9c57dd3f076"} Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.369981 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad35d04e-1800-463f-8059-29fac13e2947","Type":"ContainerStarted","Data":"9103849d7b58088b15532e8cff5f8a383700b40bfc056f61cf2cd33d5c39aa7f"} Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 
08:45:18.373081 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9f58f56d-176d-4468-ae5a-31e1e7fb48a1","Type":"ContainerStarted","Data":"69e2cc3279b36853222ad6d139e1b8bf4aa19d51058b03e2870d2d1c816ed006"} Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.424342 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-config\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.424423 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lff2l\" (UniqueName: \"kubernetes.io/projected/6cf8018e-f0d4-483a-8778-c94aafa4971d-kube-api-access-lff2l\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.424492 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-dns-svc\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.424520 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.424606 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.424643 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.424766 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.426186 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.427586 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-config\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.428193 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.428248 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.428802 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.429487 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cf8018e-f0d4-483a-8778-c94aafa4971d-dns-svc\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.470747 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lff2l\" (UniqueName: \"kubernetes.io/projected/6cf8018e-f0d4-483a-8778-c94aafa4971d-kube-api-access-lff2l\") pod \"dnsmasq-dns-bb85b8995-p5dww\" (UID: \"6cf8018e-f0d4-483a-8778-c94aafa4971d\") " pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.535359 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.942901 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.959805 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-dns-swift-storage-0\") pod \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.959888 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-dns-svc\") pod \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.959953 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blz9d\" (UniqueName: \"kubernetes.io/projected/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-kube-api-access-blz9d\") pod \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.960030 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-ovsdbserver-sb\") pod \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.960069 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-ovsdbserver-nb\") pod \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\" (UID: 
\"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " Feb 26 08:45:18 crc kubenswrapper[4741]: I0226 08:45:18.960134 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-config\") pod \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\" (UID: \"bd5ef4ab-5d3a-41b5-965e-7579768d32b8\") " Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.036672 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-kube-api-access-blz9d" (OuterVolumeSpecName: "kube-api-access-blz9d") pod "bd5ef4ab-5d3a-41b5-965e-7579768d32b8" (UID: "bd5ef4ab-5d3a-41b5-965e-7579768d32b8"). InnerVolumeSpecName "kube-api-access-blz9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.066394 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blz9d\" (UniqueName: \"kubernetes.io/projected/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-kube-api-access-blz9d\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.217730 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd5ef4ab-5d3a-41b5-965e-7579768d32b8" (UID: "bd5ef4ab-5d3a-41b5-965e-7579768d32b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.249211 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bd5ef4ab-5d3a-41b5-965e-7579768d32b8" (UID: "bd5ef4ab-5d3a-41b5-965e-7579768d32b8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.276929 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-config" (OuterVolumeSpecName: "config") pod "bd5ef4ab-5d3a-41b5-965e-7579768d32b8" (UID: "bd5ef4ab-5d3a-41b5-965e-7579768d32b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.286299 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd5ef4ab-5d3a-41b5-965e-7579768d32b8" (UID: "bd5ef4ab-5d3a-41b5-965e-7579768d32b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.323707 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.323768 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.325206 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.326674 4741 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 
08:45:19.368802 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd5ef4ab-5d3a-41b5-965e-7579768d32b8" (UID: "bd5ef4ab-5d3a-41b5-965e-7579768d32b8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.411939 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.412342 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-z7vs2" event={"ID":"bd5ef4ab-5d3a-41b5-965e-7579768d32b8","Type":"ContainerDied","Data":"4ca4573fb200790751d87aea0db009635e3c038a41dda53a62d128a52630c942"} Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.413543 4741 scope.go:117] "RemoveContainer" containerID="33af1e4eec86045af6d96cf3b23e28ff3d21b6e5ec6442c5028cd9c57dd3f076" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.432827 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5ef4ab-5d3a-41b5-965e-7579768d32b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.487153 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-z7vs2"] Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.489404 4741 scope.go:117] "RemoveContainer" containerID="56f0a7e83ec0e3dd5f751d65a461cd086d3fe389cddbec995fd4a2b39ab138f6" Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.509500 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-z7vs2"] Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.528570 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-bb85b8995-p5dww"] Feb 26 08:45:19 crc kubenswrapper[4741]: I0226 08:45:19.820642 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5ef4ab-5d3a-41b5-965e-7579768d32b8" path="/var/lib/kubelet/pods/bd5ef4ab-5d3a-41b5-965e-7579768d32b8/volumes" Feb 26 08:45:20 crc kubenswrapper[4741]: I0226 08:45:20.434509 4741 generic.go:334] "Generic (PLEG): container finished" podID="6cf8018e-f0d4-483a-8778-c94aafa4971d" containerID="51b25414e8a8c34a84bb830be58b55618ec6278bf95eba73e608b3efe7d5ab6c" exitCode=0 Feb 26 08:45:20 crc kubenswrapper[4741]: I0226 08:45:20.434654 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-p5dww" event={"ID":"6cf8018e-f0d4-483a-8778-c94aafa4971d","Type":"ContainerDied","Data":"51b25414e8a8c34a84bb830be58b55618ec6278bf95eba73e608b3efe7d5ab6c"} Feb 26 08:45:20 crc kubenswrapper[4741]: I0226 08:45:20.434932 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-p5dww" event={"ID":"6cf8018e-f0d4-483a-8778-c94aafa4971d","Type":"ContainerStarted","Data":"c81a20118f1c4a234e049efc29eb649b9ab04c8d130eb42f6189028ecb9035f8"} Feb 26 08:45:21 crc kubenswrapper[4741]: I0226 08:45:21.458828 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-p5dww" event={"ID":"6cf8018e-f0d4-483a-8778-c94aafa4971d","Type":"ContainerStarted","Data":"6be4a76af673a47d1b9787e922a6c77a441d3414e9accf3d3bce543552590a51"} Feb 26 08:45:21 crc kubenswrapper[4741]: I0226 08:45:21.460965 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:21 crc kubenswrapper[4741]: I0226 08:45:21.469771 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad35d04e-1800-463f-8059-29fac13e2947","Type":"ContainerStarted","Data":"019c48635e0e159325d1710191c51f410b4e525d4718029454e52dec08a558c4"} Feb 26 08:45:21 crc 
kubenswrapper[4741]: I0226 08:45:21.469997 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 08:45:21 crc kubenswrapper[4741]: I0226 08:45:21.487163 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb85b8995-p5dww" podStartSLOduration=3.487137783 podStartE2EDuration="3.487137783s" podCreationTimestamp="2026-02-26 08:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:45:21.483848619 +0000 UTC m=+1956.479786006" watchObservedRunningTime="2026-02-26 08:45:21.487137783 +0000 UTC m=+1956.483075170" Feb 26 08:45:21 crc kubenswrapper[4741]: I0226 08:45:21.533611 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=12.123689784 podStartE2EDuration="21.533581286s" podCreationTimestamp="2026-02-26 08:45:00 +0000 UTC" firstStartedPulling="2026-02-26 08:45:10.59795062 +0000 UTC m=+1945.593888007" lastFinishedPulling="2026-02-26 08:45:20.007842122 +0000 UTC m=+1955.003779509" observedRunningTime="2026-02-26 08:45:21.51092861 +0000 UTC m=+1956.506866017" watchObservedRunningTime="2026-02-26 08:45:21.533581286 +0000 UTC m=+1956.529518673" Feb 26 08:45:21 crc kubenswrapper[4741]: I0226 08:45:21.788466 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:45:21 crc kubenswrapper[4741]: E0226 08:45:21.789161 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 
08:45:22 crc kubenswrapper[4741]: I0226 08:45:22.489650 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hng7c" event={"ID":"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3","Type":"ContainerStarted","Data":"81ba6762658d3d935a5f05c81677b2a119c163ba2c38a112a52ff6c911fa97ab"} Feb 26 08:45:22 crc kubenswrapper[4741]: I0226 08:45:22.519426 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-hng7c" podStartSLOduration=2.269963353 podStartE2EDuration="35.519403696s" podCreationTimestamp="2026-02-26 08:44:47 +0000 UTC" firstStartedPulling="2026-02-26 08:44:48.791924972 +0000 UTC m=+1923.787862349" lastFinishedPulling="2026-02-26 08:45:22.041365305 +0000 UTC m=+1957.037302692" observedRunningTime="2026-02-26 08:45:22.508688111 +0000 UTC m=+1957.504625498" watchObservedRunningTime="2026-02-26 08:45:22.519403696 +0000 UTC m=+1957.515341083" Feb 26 08:45:25 crc kubenswrapper[4741]: I0226 08:45:25.535773 4741 generic.go:334] "Generic (PLEG): container finished" podID="ee9aa4f4-8a5b-4b08-9708-a83062f9bec3" containerID="81ba6762658d3d935a5f05c81677b2a119c163ba2c38a112a52ff6c911fa97ab" exitCode=0 Feb 26 08:45:25 crc kubenswrapper[4741]: I0226 08:45:25.535948 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hng7c" event={"ID":"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3","Type":"ContainerDied","Data":"81ba6762658d3d935a5f05c81677b2a119c163ba2c38a112a52ff6c911fa97ab"} Feb 26 08:45:27 crc kubenswrapper[4741]: I0226 08:45:27.090655 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-hng7c" Feb 26 08:45:27 crc kubenswrapper[4741]: I0226 08:45:27.114407 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-config-data\") pod \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\" (UID: \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\") " Feb 26 08:45:27 crc kubenswrapper[4741]: I0226 08:45:27.114903 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x7j9\" (UniqueName: \"kubernetes.io/projected/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-kube-api-access-4x7j9\") pod \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\" (UID: \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\") " Feb 26 08:45:27 crc kubenswrapper[4741]: I0226 08:45:27.115050 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-combined-ca-bundle\") pod \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\" (UID: \"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3\") " Feb 26 08:45:27 crc kubenswrapper[4741]: I0226 08:45:27.130367 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-kube-api-access-4x7j9" (OuterVolumeSpecName: "kube-api-access-4x7j9") pod "ee9aa4f4-8a5b-4b08-9708-a83062f9bec3" (UID: "ee9aa4f4-8a5b-4b08-9708-a83062f9bec3"). InnerVolumeSpecName "kube-api-access-4x7j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:45:27 crc kubenswrapper[4741]: I0226 08:45:27.166194 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee9aa4f4-8a5b-4b08-9708-a83062f9bec3" (UID: "ee9aa4f4-8a5b-4b08-9708-a83062f9bec3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:27 crc kubenswrapper[4741]: I0226 08:45:27.211939 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-config-data" (OuterVolumeSpecName: "config-data") pod "ee9aa4f4-8a5b-4b08-9708-a83062f9bec3" (UID: "ee9aa4f4-8a5b-4b08-9708-a83062f9bec3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:27 crc kubenswrapper[4741]: I0226 08:45:27.220082 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x7j9\" (UniqueName: \"kubernetes.io/projected/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-kube-api-access-4x7j9\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:27 crc kubenswrapper[4741]: I0226 08:45:27.220170 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:27 crc kubenswrapper[4741]: I0226 08:45:27.220183 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:27 crc kubenswrapper[4741]: I0226 08:45:27.594046 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hng7c" event={"ID":"ee9aa4f4-8a5b-4b08-9708-a83062f9bec3","Type":"ContainerDied","Data":"7d7773cf41e0d7014488d78c9f9b9922ee0c941a3fdb395d4018ecf488f5b776"} Feb 26 08:45:27 crc kubenswrapper[4741]: I0226 08:45:27.594385 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d7773cf41e0d7014488d78c9f9b9922ee0c941a3fdb395d4018ecf488f5b776" Feb 26 08:45:27 crc kubenswrapper[4741]: I0226 08:45:27.594094 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-hng7c" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.539280 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bb85b8995-p5dww" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.587656 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6859949d69-rth8q"] Feb 26 08:45:28 crc kubenswrapper[4741]: E0226 08:45:28.588538 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5ef4ab-5d3a-41b5-965e-7579768d32b8" containerName="init" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.588561 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5ef4ab-5d3a-41b5-965e-7579768d32b8" containerName="init" Feb 26 08:45:28 crc kubenswrapper[4741]: E0226 08:45:28.588591 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee9aa4f4-8a5b-4b08-9708-a83062f9bec3" containerName="heat-db-sync" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.588599 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee9aa4f4-8a5b-4b08-9708-a83062f9bec3" containerName="heat-db-sync" Feb 26 08:45:28 crc kubenswrapper[4741]: E0226 08:45:28.588614 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5ef4ab-5d3a-41b5-965e-7579768d32b8" containerName="dnsmasq-dns" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.588623 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5ef4ab-5d3a-41b5-965e-7579768d32b8" containerName="dnsmasq-dns" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.588873 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5ef4ab-5d3a-41b5-965e-7579768d32b8" containerName="dnsmasq-dns" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.588906 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee9aa4f4-8a5b-4b08-9708-a83062f9bec3" containerName="heat-db-sync" Feb 26 08:45:28 crc kubenswrapper[4741]: 
I0226 08:45:28.590236 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.634384 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6859949d69-rth8q"] Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.657613 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-d4c87cddd-2cr8g"] Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.660252 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.667548 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxwhm\" (UniqueName: \"kubernetes.io/projected/a48be3c0-df67-4f76-af9e-d9679ae9da07-kube-api-access-dxwhm\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.667636 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-public-tls-certs\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.667835 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-config-data\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.667944 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-config-data-custom\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.668031 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6pr9\" (UniqueName: \"kubernetes.io/projected/4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9-kube-api-access-h6pr9\") pod \"heat-engine-6859949d69-rth8q\" (UID: \"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9\") " pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.668140 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-internal-tls-certs\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.668175 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9-config-data\") pod \"heat-engine-6859949d69-rth8q\" (UID: \"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9\") " pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.668200 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9-config-data-custom\") pod \"heat-engine-6859949d69-rth8q\" (UID: \"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9\") " pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.668251 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9-combined-ca-bundle\") pod \"heat-engine-6859949d69-rth8q\" (UID: \"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9\") " pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.668374 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-combined-ca-bundle\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.690626 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-lbgdz"] Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.691002 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68df85789f-lbgdz" podUID="368e20b6-a506-49ea-b015-0f186a5ab756" containerName="dnsmasq-dns" containerID="cri-o://b09cc6d2b171a23a66f4168f0330c5a3d5344a70eeb44c1e3aacf5170dde0fc1" gracePeriod=10 Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.743200 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-d4c87cddd-2cr8g"] Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.769580 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-config-data-custom\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.787515 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6pr9\" (UniqueName: 
\"kubernetes.io/projected/4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9-kube-api-access-h6pr9\") pod \"heat-engine-6859949d69-rth8q\" (UID: \"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9\") " pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.787732 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-internal-tls-certs\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.787762 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9-config-data\") pod \"heat-engine-6859949d69-rth8q\" (UID: \"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9\") " pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.787797 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9-config-data-custom\") pod \"heat-engine-6859949d69-rth8q\" (UID: \"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9\") " pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.787875 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9-combined-ca-bundle\") pod \"heat-engine-6859949d69-rth8q\" (UID: \"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9\") " pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.788066 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-combined-ca-bundle\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.788463 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxwhm\" (UniqueName: \"kubernetes.io/projected/a48be3c0-df67-4f76-af9e-d9679ae9da07-kube-api-access-dxwhm\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.788540 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-public-tls-certs\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.788784 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-config-data\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.785325 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-config-data-custom\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.791796 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-internal-tls-certs\") pod 
\"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.803715 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9-config-data\") pod \"heat-engine-6859949d69-rth8q\" (UID: \"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9\") " pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.803810 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-754cb96d-pnhrs"] Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.808255 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-config-data\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.809031 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9-combined-ca-bundle\") pod \"heat-engine-6859949d69-rth8q\" (UID: \"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9\") " pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.811964 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-combined-ca-bundle\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.820903 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a48be3c0-df67-4f76-af9e-d9679ae9da07-public-tls-certs\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.821301 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.826407 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9-config-data-custom\") pod \"heat-engine-6859949d69-rth8q\" (UID: \"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9\") " pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.829002 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6pr9\" (UniqueName: \"kubernetes.io/projected/4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9-kube-api-access-h6pr9\") pod \"heat-engine-6859949d69-rth8q\" (UID: \"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9\") " pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.829473 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-754cb96d-pnhrs"] Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.831090 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxwhm\" (UniqueName: \"kubernetes.io/projected/a48be3c0-df67-4f76-af9e-d9679ae9da07-kube-api-access-dxwhm\") pod \"heat-api-d4c87cddd-2cr8g\" (UID: \"a48be3c0-df67-4f76-af9e-d9679ae9da07\") " pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.896607 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-internal-tls-certs\") 
pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.896700 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-config-data-custom\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.897276 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-config-data\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.897623 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvjp5\" (UniqueName: \"kubernetes.io/projected/801de166-56f0-4a77-b57b-0be437d80ead-kube-api-access-gvjp5\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.897940 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-public-tls-certs\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.898160 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-combined-ca-bundle\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:28 crc kubenswrapper[4741]: I0226 08:45:28.921835 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.002088 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-public-tls-certs\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.002199 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-combined-ca-bundle\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.002261 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-internal-tls-certs\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.002298 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-config-data-custom\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 
08:45:29.002370 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-config-data\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.002493 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvjp5\" (UniqueName: \"kubernetes.io/projected/801de166-56f0-4a77-b57b-0be437d80ead-kube-api-access-gvjp5\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.010780 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-public-tls-certs\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.018640 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-config-data\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.019210 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-internal-tls-certs\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.020320 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-config-data-custom\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.020463 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801de166-56f0-4a77-b57b-0be437d80ead-combined-ca-bundle\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.022473 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.024257 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvjp5\" (UniqueName: \"kubernetes.io/projected/801de166-56f0-4a77-b57b-0be437d80ead-kube-api-access-gvjp5\") pod \"heat-cfnapi-754cb96d-pnhrs\" (UID: \"801de166-56f0-4a77-b57b-0be437d80ead\") " pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.230601 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.372514 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.523173 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-ovsdbserver-sb\") pod \"368e20b6-a506-49ea-b015-0f186a5ab756\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.523461 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-ovsdbserver-nb\") pod \"368e20b6-a506-49ea-b015-0f186a5ab756\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.523525 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-openstack-edpm-ipam\") pod \"368e20b6-a506-49ea-b015-0f186a5ab756\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.523556 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-dns-svc\") pod \"368e20b6-a506-49ea-b015-0f186a5ab756\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.523622 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzgzc\" (UniqueName: \"kubernetes.io/projected/368e20b6-a506-49ea-b015-0f186a5ab756-kube-api-access-wzgzc\") pod \"368e20b6-a506-49ea-b015-0f186a5ab756\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.524507 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-config\") pod \"368e20b6-a506-49ea-b015-0f186a5ab756\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.524526 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-dns-swift-storage-0\") pod \"368e20b6-a506-49ea-b015-0f186a5ab756\" (UID: \"368e20b6-a506-49ea-b015-0f186a5ab756\") " Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.536955 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368e20b6-a506-49ea-b015-0f186a5ab756-kube-api-access-wzgzc" (OuterVolumeSpecName: "kube-api-access-wzgzc") pod "368e20b6-a506-49ea-b015-0f186a5ab756" (UID: "368e20b6-a506-49ea-b015-0f186a5ab756"). InnerVolumeSpecName "kube-api-access-wzgzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.625300 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6859949d69-rth8q"] Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.631256 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzgzc\" (UniqueName: \"kubernetes.io/projected/368e20b6-a506-49ea-b015-0f186a5ab756-kube-api-access-wzgzc\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.636000 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "368e20b6-a506-49ea-b015-0f186a5ab756" (UID: "368e20b6-a506-49ea-b015-0f186a5ab756"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:29 crc kubenswrapper[4741]: W0226 08:45:29.638012 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cac2933_9bb3_4dc7_9c8d_8e738d59c6a9.slice/crio-45e1bdd3b72b290f733411df475a4896553506f6661e0df3496129c07221da24 WatchSource:0}: Error finding container 45e1bdd3b72b290f733411df475a4896553506f6661e0df3496129c07221da24: Status 404 returned error can't find the container with id 45e1bdd3b72b290f733411df475a4896553506f6661e0df3496129c07221da24 Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.677126 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "368e20b6-a506-49ea-b015-0f186a5ab756" (UID: "368e20b6-a506-49ea-b015-0f186a5ab756"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.686964 4741 generic.go:334] "Generic (PLEG): container finished" podID="368e20b6-a506-49ea-b015-0f186a5ab756" containerID="b09cc6d2b171a23a66f4168f0330c5a3d5344a70eeb44c1e3aacf5170dde0fc1" exitCode=0 Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.687302 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-lbgdz" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.687331 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-lbgdz" event={"ID":"368e20b6-a506-49ea-b015-0f186a5ab756","Type":"ContainerDied","Data":"b09cc6d2b171a23a66f4168f0330c5a3d5344a70eeb44c1e3aacf5170dde0fc1"} Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.696788 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-lbgdz" event={"ID":"368e20b6-a506-49ea-b015-0f186a5ab756","Type":"ContainerDied","Data":"2397341e489e95ac2ad066044d416f0d28e1695b0b531b87e5a86be4690bf576"} Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.696859 4741 scope.go:117] "RemoveContainer" containerID="b09cc6d2b171a23a66f4168f0330c5a3d5344a70eeb44c1e3aacf5170dde0fc1" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.710934 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "368e20b6-a506-49ea-b015-0f186a5ab756" (UID: "368e20b6-a506-49ea-b015-0f186a5ab756"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.725941 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-config" (OuterVolumeSpecName: "config") pod "368e20b6-a506-49ea-b015-0f186a5ab756" (UID: "368e20b6-a506-49ea-b015-0f186a5ab756"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.740022 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.740065 4741 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.740075 4741 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-config\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.740085 4741 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.745409 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "368e20b6-a506-49ea-b015-0f186a5ab756" (UID: "368e20b6-a506-49ea-b015-0f186a5ab756"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.758621 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "368e20b6-a506-49ea-b015-0f186a5ab756" (UID: "368e20b6-a506-49ea-b015-0f186a5ab756"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.799317 4741 scope.go:117] "RemoveContainer" containerID="b686d7ee278690234d715a6a5bcccf899182c0c5a8fbf6a4b558ed37816690f9" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.852425 4741 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.852459 4741 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/368e20b6-a506-49ea-b015-0f186a5ab756-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:29 crc kubenswrapper[4741]: I0226 08:45:29.904562 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-d4c87cddd-2cr8g"] Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.045171 4741 scope.go:117] "RemoveContainer" containerID="b09cc6d2b171a23a66f4168f0330c5a3d5344a70eeb44c1e3aacf5170dde0fc1" Feb 26 08:45:30 crc kubenswrapper[4741]: E0226 08:45:30.045832 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b09cc6d2b171a23a66f4168f0330c5a3d5344a70eeb44c1e3aacf5170dde0fc1\": container with ID starting with b09cc6d2b171a23a66f4168f0330c5a3d5344a70eeb44c1e3aacf5170dde0fc1 not found: ID does not exist" containerID="b09cc6d2b171a23a66f4168f0330c5a3d5344a70eeb44c1e3aacf5170dde0fc1" Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.045885 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b09cc6d2b171a23a66f4168f0330c5a3d5344a70eeb44c1e3aacf5170dde0fc1"} err="failed to get container status \"b09cc6d2b171a23a66f4168f0330c5a3d5344a70eeb44c1e3aacf5170dde0fc1\": rpc error: code = NotFound desc = could not find container 
\"b09cc6d2b171a23a66f4168f0330c5a3d5344a70eeb44c1e3aacf5170dde0fc1\": container with ID starting with b09cc6d2b171a23a66f4168f0330c5a3d5344a70eeb44c1e3aacf5170dde0fc1 not found: ID does not exist" Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.045929 4741 scope.go:117] "RemoveContainer" containerID="b686d7ee278690234d715a6a5bcccf899182c0c5a8fbf6a4b558ed37816690f9" Feb 26 08:45:30 crc kubenswrapper[4741]: E0226 08:45:30.046463 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b686d7ee278690234d715a6a5bcccf899182c0c5a8fbf6a4b558ed37816690f9\": container with ID starting with b686d7ee278690234d715a6a5bcccf899182c0c5a8fbf6a4b558ed37816690f9 not found: ID does not exist" containerID="b686d7ee278690234d715a6a5bcccf899182c0c5a8fbf6a4b558ed37816690f9" Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.046515 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b686d7ee278690234d715a6a5bcccf899182c0c5a8fbf6a4b558ed37816690f9"} err="failed to get container status \"b686d7ee278690234d715a6a5bcccf899182c0c5a8fbf6a4b558ed37816690f9\": rpc error: code = NotFound desc = could not find container \"b686d7ee278690234d715a6a5bcccf899182c0c5a8fbf6a4b558ed37816690f9\": container with ID starting with b686d7ee278690234d715a6a5bcccf899182c0c5a8fbf6a4b558ed37816690f9 not found: ID does not exist" Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.059789 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-754cb96d-pnhrs"] Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.214172 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-lbgdz"] Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.235578 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-lbgdz"] Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.476579 4741 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.711552 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d4c87cddd-2cr8g" event={"ID":"a48be3c0-df67-4f76-af9e-d9679ae9da07","Type":"ContainerStarted","Data":"1845ed9ab3792025bd6f155980c373b381d6ad1f00ad002350cfff38b15f79ae"} Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.713799 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6859949d69-rth8q" event={"ID":"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9","Type":"ContainerStarted","Data":"cb161ecea3ee7dfd2608835cb7e2636fab65e9ada626a214f2aaf978b1c330d7"} Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.713835 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.713851 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6859949d69-rth8q" event={"ID":"4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9","Type":"ContainerStarted","Data":"45e1bdd3b72b290f733411df475a4896553506f6661e0df3496129c07221da24"} Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.721562 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-754cb96d-pnhrs" event={"ID":"801de166-56f0-4a77-b57b-0be437d80ead","Type":"ContainerStarted","Data":"75310e33d2a07cf29cc469ce4ddd605882602739910e1c1f99e92a14f42a1bab"} Feb 26 08:45:30 crc kubenswrapper[4741]: I0226 08:45:30.755100 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6859949d69-rth8q" podStartSLOduration=2.755050508 podStartE2EDuration="2.755050508s" podCreationTimestamp="2026-02-26 08:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:45:30.738991841 +0000 UTC 
m=+1965.734929228" watchObservedRunningTime="2026-02-26 08:45:30.755050508 +0000 UTC m=+1965.750987895" Feb 26 08:45:31 crc kubenswrapper[4741]: I0226 08:45:31.802576 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="368e20b6-a506-49ea-b015-0f186a5ab756" path="/var/lib/kubelet/pods/368e20b6-a506-49ea-b015-0f186a5ab756/volumes" Feb 26 08:45:32 crc kubenswrapper[4741]: I0226 08:45:32.789401 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:45:32 crc kubenswrapper[4741]: E0226 08:45:32.790101 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:45:33 crc kubenswrapper[4741]: I0226 08:45:33.805873 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:33 crc kubenswrapper[4741]: I0226 08:45:33.807629 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-754cb96d-pnhrs" event={"ID":"801de166-56f0-4a77-b57b-0be437d80ead","Type":"ContainerStarted","Data":"d030874213318457204559588d155604f67c7e3fde87b4978b7e9ea2e5abe9da"} Feb 26 08:45:33 crc kubenswrapper[4741]: I0226 08:45:33.807762 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d4c87cddd-2cr8g" event={"ID":"a48be3c0-df67-4f76-af9e-d9679ae9da07","Type":"ContainerStarted","Data":"53ce01688a71560300caa823fe74114fd64b119434f9740126a82636d2fd1803"} Feb 26 08:45:33 crc kubenswrapper[4741]: I0226 08:45:33.807856 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-d4c87cddd-2cr8g" 
Feb 26 08:45:33 crc kubenswrapper[4741]: I0226 08:45:33.822896 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-754cb96d-pnhrs" podStartSLOduration=3.568014793 podStartE2EDuration="5.822861092s" podCreationTimestamp="2026-02-26 08:45:28 +0000 UTC" firstStartedPulling="2026-02-26 08:45:30.062123055 +0000 UTC m=+1965.058060442" lastFinishedPulling="2026-02-26 08:45:32.316969354 +0000 UTC m=+1967.312906741" observedRunningTime="2026-02-26 08:45:33.811478598 +0000 UTC m=+1968.807415985" watchObservedRunningTime="2026-02-26 08:45:33.822861092 +0000 UTC m=+1968.818798499" Feb 26 08:45:33 crc kubenswrapper[4741]: I0226 08:45:33.845160 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-d4c87cddd-2cr8g" podStartSLOduration=3.463648631 podStartE2EDuration="5.845128827s" podCreationTimestamp="2026-02-26 08:45:28 +0000 UTC" firstStartedPulling="2026-02-26 08:45:29.922140757 +0000 UTC m=+1964.918078144" lastFinishedPulling="2026-02-26 08:45:32.303620953 +0000 UTC m=+1967.299558340" observedRunningTime="2026-02-26 08:45:33.835549604 +0000 UTC m=+1968.831486991" watchObservedRunningTime="2026-02-26 08:45:33.845128827 +0000 UTC m=+1968.841066224" Feb 26 08:45:38 crc kubenswrapper[4741]: I0226 08:45:38.990179 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz"] Feb 26 08:45:38 crc kubenswrapper[4741]: E0226 08:45:38.991191 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368e20b6-a506-49ea-b015-0f186a5ab756" containerName="dnsmasq-dns" Feb 26 08:45:38 crc kubenswrapper[4741]: I0226 08:45:38.991210 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="368e20b6-a506-49ea-b015-0f186a5ab756" containerName="dnsmasq-dns" Feb 26 08:45:38 crc kubenswrapper[4741]: E0226 08:45:38.991281 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368e20b6-a506-49ea-b015-0f186a5ab756" 
containerName="init" Feb 26 08:45:38 crc kubenswrapper[4741]: I0226 08:45:38.991293 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="368e20b6-a506-49ea-b015-0f186a5ab756" containerName="init" Feb 26 08:45:38 crc kubenswrapper[4741]: I0226 08:45:38.991647 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="368e20b6-a506-49ea-b015-0f186a5ab756" containerName="dnsmasq-dns" Feb 26 08:45:38 crc kubenswrapper[4741]: I0226 08:45:38.992800 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:45:38 crc kubenswrapper[4741]: I0226 08:45:38.997277 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:45:38 crc kubenswrapper[4741]: I0226 08:45:38.997586 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:45:38 crc kubenswrapper[4741]: I0226 08:45:38.998059 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:45:38 crc kubenswrapper[4741]: I0226 08:45:38.999362 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.024952 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz"] Feb 26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.174283 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 
26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.174419 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.174680 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.174761 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q76gt\" (UniqueName: \"kubernetes.io/projected/e73f4159-15a0-40ca-b09a-903cb04c34d9-kube-api-access-q76gt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.277591 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.277736 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q76gt\" (UniqueName: \"kubernetes.io/projected/e73f4159-15a0-40ca-b09a-903cb04c34d9-kube-api-access-q76gt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.277925 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.278008 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.287824 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.287846 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.289056 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.301865 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q76gt\" (UniqueName: \"kubernetes.io/projected/e73f4159-15a0-40ca-b09a-903cb04c34d9-kube-api-access-q76gt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:45:39 crc kubenswrapper[4741]: I0226 08:45:39.317863 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:45:40 crc kubenswrapper[4741]: I0226 08:45:40.857426 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-d4c87cddd-2cr8g" Feb 26 08:45:40 crc kubenswrapper[4741]: I0226 08:45:40.941094 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-64468c668c-bhzvw"] Feb 26 08:45:40 crc kubenswrapper[4741]: I0226 08:45:40.941776 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-64468c668c-bhzvw" podUID="8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" containerName="heat-api" containerID="cri-o://b9e8eae684854b645369f5bf2287914c87874e87ec849e5b30254b7ae7b0c11d" gracePeriod=60 Feb 26 08:45:41 crc kubenswrapper[4741]: I0226 08:45:41.007344 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-754cb96d-pnhrs" Feb 26 08:45:41 crc kubenswrapper[4741]: I0226 08:45:41.089316 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-875bfc755-9ndh4"] Feb 26 08:45:41 crc kubenswrapper[4741]: I0226 08:45:41.089584 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-875bfc755-9ndh4" podUID="47290f7b-69ba-42b3-88c8-cfd13d6009ae" containerName="heat-cfnapi" containerID="cri-o://16d8083239d78c9048e36a510d693abd7cde5e02f5669c0a3c01d5dbe0d28a06" gracePeriod=60 Feb 26 08:45:41 crc kubenswrapper[4741]: I0226 08:45:41.694143 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz"] Feb 26 08:45:41 crc kubenswrapper[4741]: W0226 08:45:41.703165 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode73f4159_15a0_40ca_b09a_903cb04c34d9.slice/crio-e56d5bda46db220bdb2dc6701c28be5750a21dd1600641608b1715e14cc338ca 
WatchSource:0}: Error finding container e56d5bda46db220bdb2dc6701c28be5750a21dd1600641608b1715e14cc338ca: Status 404 returned error can't find the container with id e56d5bda46db220bdb2dc6701c28be5750a21dd1600641608b1715e14cc338ca Feb 26 08:45:41 crc kubenswrapper[4741]: I0226 08:45:41.938927 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" event={"ID":"e73f4159-15a0-40ca-b09a-903cb04c34d9","Type":"ContainerStarted","Data":"e56d5bda46db220bdb2dc6701c28be5750a21dd1600641608b1715e14cc338ca"} Feb 26 08:45:44 crc kubenswrapper[4741]: I0226 08:45:44.461780 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-64468c668c-bhzvw" podUID="8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.236:8004/healthcheck\": read tcp 10.217.0.2:55256->10.217.0.236:8004: read: connection reset by peer" Feb 26 08:45:44 crc kubenswrapper[4741]: I0226 08:45:44.515477 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-875bfc755-9ndh4" podUID="47290f7b-69ba-42b3-88c8-cfd13d6009ae" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.237:8000/healthcheck\": read tcp 10.217.0.2:49876->10.217.0.237:8000: read: connection reset by peer" Feb 26 08:45:45 crc kubenswrapper[4741]: I0226 08:45:45.004809 4741 generic.go:334] "Generic (PLEG): container finished" podID="8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" containerID="b9e8eae684854b645369f5bf2287914c87874e87ec849e5b30254b7ae7b0c11d" exitCode=0 Feb 26 08:45:45 crc kubenswrapper[4741]: I0226 08:45:45.004902 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64468c668c-bhzvw" event={"ID":"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a","Type":"ContainerDied","Data":"b9e8eae684854b645369f5bf2287914c87874e87ec849e5b30254b7ae7b0c11d"} Feb 26 08:45:45 crc kubenswrapper[4741]: I0226 08:45:45.009821 4741 generic.go:334] 
"Generic (PLEG): container finished" podID="47290f7b-69ba-42b3-88c8-cfd13d6009ae" containerID="16d8083239d78c9048e36a510d693abd7cde5e02f5669c0a3c01d5dbe0d28a06" exitCode=0 Feb 26 08:45:45 crc kubenswrapper[4741]: I0226 08:45:45.009869 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-875bfc755-9ndh4" event={"ID":"47290f7b-69ba-42b3-88c8-cfd13d6009ae","Type":"ContainerDied","Data":"16d8083239d78c9048e36a510d693abd7cde5e02f5669c0a3c01d5dbe0d28a06"} Feb 26 08:45:46 crc kubenswrapper[4741]: I0226 08:45:46.788316 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:45:46 crc kubenswrapper[4741]: E0226 08:45:46.789071 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:45:48 crc kubenswrapper[4741]: I0226 08:45:48.987140 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6859949d69-rth8q" Feb 26 08:45:49 crc kubenswrapper[4741]: I0226 08:45:49.056750 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-654898f896-cnwpl"] Feb 26 08:45:49 crc kubenswrapper[4741]: I0226 08:45:49.057069 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-654898f896-cnwpl" podUID="fdf44a23-6035-426e-b4ab-dc1bccedd505" containerName="heat-engine" containerID="cri-o://c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648" gracePeriod=60 Feb 26 08:45:50 crc kubenswrapper[4741]: E0226 08:45:50.016014 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 08:45:50 crc kubenswrapper[4741]: E0226 08:45:50.018336 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 08:45:50 crc kubenswrapper[4741]: E0226 08:45:50.019836 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 08:45:50 crc kubenswrapper[4741]: E0226 08:45:50.019902 4741 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-654898f896-cnwpl" podUID="fdf44a23-6035-426e-b4ab-dc1bccedd505" containerName="heat-engine" Feb 26 08:45:50 crc kubenswrapper[4741]: I0226 08:45:50.097359 4741 generic.go:334] "Generic (PLEG): container finished" podID="9f58f56d-176d-4468-ae5a-31e1e7fb48a1" containerID="69e2cc3279b36853222ad6d139e1b8bf4aa19d51058b03e2870d2d1c816ed006" exitCode=0 Feb 26 08:45:50 crc kubenswrapper[4741]: I0226 08:45:50.097462 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9f58f56d-176d-4468-ae5a-31e1e7fb48a1","Type":"ContainerDied","Data":"69e2cc3279b36853222ad6d139e1b8bf4aa19d51058b03e2870d2d1c816ed006"} Feb 26 08:45:50 crc 
kubenswrapper[4741]: I0226 08:45:50.100254 4741 generic.go:334] "Generic (PLEG): container finished" podID="acd31381-59b4-426e-94f1-57ac13548b26" containerID="8daf402627e18a8adf28279a0d51b893d071c8c53a63ce5909d8458838a167c9" exitCode=0 Feb 26 08:45:50 crc kubenswrapper[4741]: I0226 08:45:50.100301 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"acd31381-59b4-426e-94f1-57ac13548b26","Type":"ContainerDied","Data":"8daf402627e18a8adf28279a0d51b893d071c8c53a63ce5909d8458838a167c9"} Feb 26 08:45:50 crc kubenswrapper[4741]: I0226 08:45:50.546658 4741 scope.go:117] "RemoveContainer" containerID="d78751a0e72f35abb6bc147d483c0ff3c66ce29e5d4203c12e0edceebcab8a95" Feb 26 08:45:55 crc kubenswrapper[4741]: I0226 08:45:55.827948 4741 scope.go:117] "RemoveContainer" containerID="ebc8747182dca40700dcefb35fbf6229f718e2eff5b67a4ca0928b401692199d" Feb 26 08:45:55 crc kubenswrapper[4741]: I0226 08:45:55.980503 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:45:55 crc kubenswrapper[4741]: I0226 08:45:55.988554 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.051235 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t92m\" (UniqueName: \"kubernetes.io/projected/47290f7b-69ba-42b3-88c8-cfd13d6009ae-kube-api-access-8t92m\") pod \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.051494 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-combined-ca-bundle\") pod \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.051694 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-public-tls-certs\") pod \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.051844 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-internal-tls-certs\") pod \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.052303 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-config-data\") pod \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.052425 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-config-data-custom\") pod \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.052537 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs22z\" (UniqueName: \"kubernetes.io/projected/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-kube-api-access-bs22z\") pod \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.052801 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-internal-tls-certs\") pod \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.055910 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-config-data-custom\") pod \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.056091 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-config-data\") pod \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\" (UID: \"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a\") " Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.056800 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-public-tls-certs\") pod \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " 
Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.057186 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-combined-ca-bundle\") pod \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\" (UID: \"47290f7b-69ba-42b3-88c8-cfd13d6009ae\") " Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.063234 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47290f7b-69ba-42b3-88c8-cfd13d6009ae-kube-api-access-8t92m" (OuterVolumeSpecName: "kube-api-access-8t92m") pod "47290f7b-69ba-42b3-88c8-cfd13d6009ae" (UID: "47290f7b-69ba-42b3-88c8-cfd13d6009ae"). InnerVolumeSpecName "kube-api-access-8t92m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.064510 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t92m\" (UniqueName: \"kubernetes.io/projected/47290f7b-69ba-42b3-88c8-cfd13d6009ae-kube-api-access-8t92m\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.071815 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "47290f7b-69ba-42b3-88c8-cfd13d6009ae" (UID: "47290f7b-69ba-42b3-88c8-cfd13d6009ae"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.073482 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-kube-api-access-bs22z" (OuterVolumeSpecName: "kube-api-access-bs22z") pod "8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" (UID: "8b8d5a7c-1557-4ac9-b14b-bc84e84b925a"). InnerVolumeSpecName "kube-api-access-bs22z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.090297 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" (UID: "8b8d5a7c-1557-4ac9-b14b-bc84e84b925a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.167971 4741 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.168009 4741 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.168019 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs22z\" (UniqueName: \"kubernetes.io/projected/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-kube-api-access-bs22z\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.179905 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47290f7b-69ba-42b3-88c8-cfd13d6009ae" (UID: "47290f7b-69ba-42b3-88c8-cfd13d6009ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.208377 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-875bfc755-9ndh4" event={"ID":"47290f7b-69ba-42b3-88c8-cfd13d6009ae","Type":"ContainerDied","Data":"2695a1e45fa7eec2e29e9ef76cb7b702097909a15d6549ce1fe8878d1185fd64"} Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.208462 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-875bfc755-9ndh4" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.208877 4741 scope.go:117] "RemoveContainer" containerID="16d8083239d78c9048e36a510d693abd7cde5e02f5669c0a3c01d5dbe0d28a06" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.212082 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64468c668c-bhzvw" event={"ID":"8b8d5a7c-1557-4ac9-b14b-bc84e84b925a","Type":"ContainerDied","Data":"37b9586378ba8c814f40327b0fec98b0e24dfb98155e969ecdbd858a8d752b80"} Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.212311 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-64468c668c-bhzvw" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.257541 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" (UID: "8b8d5a7c-1557-4ac9-b14b-bc84e84b925a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.271624 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.271658 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.273065 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" (UID: "8b8d5a7c-1557-4ac9-b14b-bc84e84b925a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.291253 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" (UID: "8b8d5a7c-1557-4ac9-b14b-bc84e84b925a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.292363 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-config-data" (OuterVolumeSpecName: "config-data") pod "8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" (UID: "8b8d5a7c-1557-4ac9-b14b-bc84e84b925a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.294191 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "47290f7b-69ba-42b3-88c8-cfd13d6009ae" (UID: "47290f7b-69ba-42b3-88c8-cfd13d6009ae"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.295057 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-config-data" (OuterVolumeSpecName: "config-data") pod "47290f7b-69ba-42b3-88c8-cfd13d6009ae" (UID: "47290f7b-69ba-42b3-88c8-cfd13d6009ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.309251 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "47290f7b-69ba-42b3-88c8-cfd13d6009ae" (UID: "47290f7b-69ba-42b3-88c8-cfd13d6009ae"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.376867 4741 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.376908 4741 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.376920 4741 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.376932 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47290f7b-69ba-42b3-88c8-cfd13d6009ae-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.376941 4741 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.376950 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.494724 4741 scope.go:117] "RemoveContainer" containerID="b9e8eae684854b645369f5bf2287914c87874e87ec849e5b30254b7ae7b0c11d" Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.580168 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-875bfc755-9ndh4"] 
Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.601004 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-875bfc755-9ndh4"] Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.614170 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-64468c668c-bhzvw"] Feb 26 08:45:56 crc kubenswrapper[4741]: I0226 08:45:56.630102 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-64468c668c-bhzvw"] Feb 26 08:45:56 crc kubenswrapper[4741]: E0226 08:45:56.762661 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Feb 26 08:45:56 crc kubenswrapper[4741]: E0226 08:45:56.763216 4741 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 08:45:56 crc kubenswrapper[4741]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Feb 26 08:45:56 crc kubenswrapper[4741]: - hosts: all Feb 26 08:45:56 crc kubenswrapper[4741]: strategy: linear Feb 26 08:45:56 crc kubenswrapper[4741]: tasks: Feb 26 08:45:56 crc kubenswrapper[4741]: - name: Enable podified-repos Feb 26 08:45:56 crc kubenswrapper[4741]: become: true Feb 26 08:45:56 crc kubenswrapper[4741]: ansible.builtin.shell: | Feb 26 08:45:56 crc kubenswrapper[4741]: set -euxo pipefail Feb 26 08:45:56 crc kubenswrapper[4741]: pushd /var/tmp Feb 26 08:45:56 crc kubenswrapper[4741]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Feb 26 08:45:56 crc kubenswrapper[4741]: pushd repo-setup-main Feb 26 08:45:56 
crc kubenswrapper[4741]: python3 -m venv ./venv Feb 26 08:45:56 crc kubenswrapper[4741]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Feb 26 08:45:56 crc kubenswrapper[4741]: ./venv/bin/repo-setup current-podified -b antelope Feb 26 08:45:56 crc kubenswrapper[4741]: popd Feb 26 08:45:56 crc kubenswrapper[4741]: rm -rf repo-setup-main Feb 26 08:45:56 crc kubenswrapper[4741]: Feb 26 08:45:56 crc kubenswrapper[4741]: Feb 26 08:45:56 crc kubenswrapper[4741]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Feb 26 08:45:56 crc kubenswrapper[4741]: edpm_override_hosts: openstack-edpm-ipam Feb 26 08:45:56 crc kubenswrapper[4741]: edpm_service_type: repo-setup Feb 26 08:45:56 crc kubenswrapper[4741]: Feb 26 08:45:56 crc kubenswrapper[4741]: Feb 26 08:45:56 crc kubenswrapper[4741]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q76gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz_openstack(e73f4159-15a0-40ca-b09a-903cb04c34d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Feb 26 08:45:56 crc kubenswrapper[4741]: > logger="UnhandledError" Feb 26 08:45:56 crc kubenswrapper[4741]: E0226 08:45:56.764413 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" podUID="e73f4159-15a0-40ca-b09a-903cb04c34d9" Feb 26 08:45:57 crc kubenswrapper[4741]: I0226 08:45:57.241175 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"acd31381-59b4-426e-94f1-57ac13548b26","Type":"ContainerStarted","Data":"adca398709aa659feaf9e0c9b9b44abc76d51ede7725e9a8d52fd6a42dba88ba"} Feb 26 08:45:57 crc kubenswrapper[4741]: I0226 08:45:57.242981 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:45:57 crc kubenswrapper[4741]: I0226 08:45:57.247286 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9f58f56d-176d-4468-ae5a-31e1e7fb48a1","Type":"ContainerStarted","Data":"b2b563d355a11b72c8c7808cd44ac0a6042d66095089a6fc7017f04e854aa227"} Feb 26 08:45:57 crc kubenswrapper[4741]: I0226 08:45:57.248336 4741 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 26 08:45:57 crc kubenswrapper[4741]: E0226 08:45:57.251896 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" podUID="e73f4159-15a0-40ca-b09a-903cb04c34d9" Feb 26 08:45:57 crc kubenswrapper[4741]: I0226 08:45:57.282608 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=46.282572123 podStartE2EDuration="46.282572123s" podCreationTimestamp="2026-02-26 08:45:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:45:57.266558837 +0000 UTC m=+1992.262496244" watchObservedRunningTime="2026-02-26 08:45:57.282572123 +0000 UTC m=+1992.278509510" Feb 26 08:45:57 crc kubenswrapper[4741]: I0226 08:45:57.336890 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=46.33685572 podStartE2EDuration="46.33685572s" podCreationTimestamp="2026-02-26 08:45:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:45:57.325957179 +0000 UTC m=+1992.321894566" watchObservedRunningTime="2026-02-26 08:45:57.33685572 +0000 UTC m=+1992.332793107" Feb 26 08:45:57 crc kubenswrapper[4741]: I0226 08:45:57.810053 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47290f7b-69ba-42b3-88c8-cfd13d6009ae" path="/var/lib/kubelet/pods/47290f7b-69ba-42b3-88c8-cfd13d6009ae/volumes" Feb 26 08:45:57 crc kubenswrapper[4741]: I0226 08:45:57.810741 4741 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" path="/var/lib/kubelet/pods/8b8d5a7c-1557-4ac9-b14b-bc84e84b925a/volumes" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.151787 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-m2b27"] Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.167998 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-m2b27"] Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.441731 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-pcr97"] Feb 26 08:45:58 crc kubenswrapper[4741]: E0226 08:45:58.442582 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" containerName="heat-api" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.442614 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" containerName="heat-api" Feb 26 08:45:58 crc kubenswrapper[4741]: E0226 08:45:58.442646 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47290f7b-69ba-42b3-88c8-cfd13d6009ae" containerName="heat-cfnapi" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.442655 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="47290f7b-69ba-42b3-88c8-cfd13d6009ae" containerName="heat-cfnapi" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.443016 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="47290f7b-69ba-42b3-88c8-cfd13d6009ae" containerName="heat-cfnapi" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.443046 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" containerName="heat-api" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.444462 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.450146 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.464493 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pcr97"] Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.548145 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-config-data\") pod \"aodh-db-sync-pcr97\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.548289 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-scripts\") pod \"aodh-db-sync-pcr97\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.548329 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-combined-ca-bundle\") pod \"aodh-db-sync-pcr97\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.548379 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qchtz\" (UniqueName: \"kubernetes.io/projected/5c215d58-07de-43c3-b0ec-ecade20263dd-kube-api-access-qchtz\") pod \"aodh-db-sync-pcr97\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.651614 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-scripts\") pod \"aodh-db-sync-pcr97\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.651681 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-combined-ca-bundle\") pod \"aodh-db-sync-pcr97\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.651706 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qchtz\" (UniqueName: \"kubernetes.io/projected/5c215d58-07de-43c3-b0ec-ecade20263dd-kube-api-access-qchtz\") pod \"aodh-db-sync-pcr97\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.651881 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-config-data\") pod \"aodh-db-sync-pcr97\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.663849 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-scripts\") pod \"aodh-db-sync-pcr97\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.664034 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-combined-ca-bundle\") 
pod \"aodh-db-sync-pcr97\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.664558 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-config-data\") pod \"aodh-db-sync-pcr97\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.675676 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qchtz\" (UniqueName: \"kubernetes.io/projected/5c215d58-07de-43c3-b0ec-ecade20263dd-kube-api-access-qchtz\") pod \"aodh-db-sync-pcr97\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.788626 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-pcr97" Feb 26 08:45:58 crc kubenswrapper[4741]: I0226 08:45:58.972566 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-64468c668c-bhzvw" podUID="8b8d5a7c-1557-4ac9-b14b-bc84e84b925a" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.236:8004/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 08:45:59 crc kubenswrapper[4741]: I0226 08:45:59.090218 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-875bfc755-9ndh4" podUID="47290f7b-69ba-42b3-88c8-cfd13d6009ae" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.237:8000/healthcheck\": dial tcp 10.217.0.237:8000: i/o timeout" Feb 26 08:45:59 crc kubenswrapper[4741]: W0226 08:45:59.474333 4741 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c215d58_07de_43c3_b0ec_ecade20263dd.slice/crio-cfcd547c7eb9ab95c1b40005dc511976da55fea69472adba3f6072c28379de60 WatchSource:0}: Error finding container cfcd547c7eb9ab95c1b40005dc511976da55fea69472adba3f6072c28379de60: Status 404 returned error can't find the container with id cfcd547c7eb9ab95c1b40005dc511976da55fea69472adba3f6072c28379de60 Feb 26 08:45:59 crc kubenswrapper[4741]: I0226 08:45:59.474926 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pcr97"] Feb 26 08:45:59 crc kubenswrapper[4741]: I0226 08:45:59.788053 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:45:59 crc kubenswrapper[4741]: I0226 08:45:59.802929 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153" path="/var/lib/kubelet/pods/7e2d3d2a-ca82-46f2-99fa-bfa2d2a5c153/volumes" Feb 26 08:46:00 crc kubenswrapper[4741]: E0226 08:46:00.014899 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 08:46:00 crc kubenswrapper[4741]: E0226 08:46:00.021227 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 08:46:00 crc kubenswrapper[4741]: E0226 08:46:00.023267 4741 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" containerID="c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 08:46:00 crc kubenswrapper[4741]: E0226 08:46:00.023313 4741 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-654898f896-cnwpl" podUID="fdf44a23-6035-426e-b4ab-dc1bccedd505" containerName="heat-engine" Feb 26 08:46:00 crc kubenswrapper[4741]: I0226 08:46:00.160851 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534926-9zsrt"] Feb 26 08:46:00 crc kubenswrapper[4741]: I0226 08:46:00.163457 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534926-9zsrt" Feb 26 08:46:00 crc kubenswrapper[4741]: I0226 08:46:00.167099 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:46:00 crc kubenswrapper[4741]: I0226 08:46:00.167099 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:46:00 crc kubenswrapper[4741]: I0226 08:46:00.167148 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:46:00 crc kubenswrapper[4741]: I0226 08:46:00.186513 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534926-9zsrt"] Feb 26 08:46:00 crc kubenswrapper[4741]: I0226 08:46:00.307099 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5kb2\" (UniqueName: \"kubernetes.io/projected/2cd25174-ea65-487c-841f-9055a74a398f-kube-api-access-d5kb2\") pod \"auto-csr-approver-29534926-9zsrt\" (UID: \"2cd25174-ea65-487c-841f-9055a74a398f\") 
" pod="openshift-infra/auto-csr-approver-29534926-9zsrt" Feb 26 08:46:00 crc kubenswrapper[4741]: I0226 08:46:00.307700 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pcr97" event={"ID":"5c215d58-07de-43c3-b0ec-ecade20263dd","Type":"ContainerStarted","Data":"cfcd547c7eb9ab95c1b40005dc511976da55fea69472adba3f6072c28379de60"} Feb 26 08:46:00 crc kubenswrapper[4741]: I0226 08:46:00.311654 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"3b89194ce392f941126adc3f41706d6d90ecd60e8c8587e461ea2236baa6d97c"} Feb 26 08:46:00 crc kubenswrapper[4741]: I0226 08:46:00.410450 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5kb2\" (UniqueName: \"kubernetes.io/projected/2cd25174-ea65-487c-841f-9055a74a398f-kube-api-access-d5kb2\") pod \"auto-csr-approver-29534926-9zsrt\" (UID: \"2cd25174-ea65-487c-841f-9055a74a398f\") " pod="openshift-infra/auto-csr-approver-29534926-9zsrt" Feb 26 08:46:00 crc kubenswrapper[4741]: I0226 08:46:00.458677 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5kb2\" (UniqueName: \"kubernetes.io/projected/2cd25174-ea65-487c-841f-9055a74a398f-kube-api-access-d5kb2\") pod \"auto-csr-approver-29534926-9zsrt\" (UID: \"2cd25174-ea65-487c-841f-9055a74a398f\") " pod="openshift-infra/auto-csr-approver-29534926-9zsrt" Feb 26 08:46:00 crc kubenswrapper[4741]: I0226 08:46:00.495161 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534926-9zsrt" Feb 26 08:46:02 crc kubenswrapper[4741]: I0226 08:46:02.461552 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534926-9zsrt"] Feb 26 08:46:02 crc kubenswrapper[4741]: W0226 08:46:02.467752 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cd25174_ea65_487c_841f_9055a74a398f.slice/crio-c9843744e7fadf1de6d7f46097c0e662e9fd188b08711c3335c8f0de786ffcdb WatchSource:0}: Error finding container c9843744e7fadf1de6d7f46097c0e662e9fd188b08711c3335c8f0de786ffcdb: Status 404 returned error can't find the container with id c9843744e7fadf1de6d7f46097c0e662e9fd188b08711c3335c8f0de786ffcdb Feb 26 08:46:03 crc kubenswrapper[4741]: I0226 08:46:03.376451 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534926-9zsrt" event={"ID":"2cd25174-ea65-487c-841f-9055a74a398f","Type":"ContainerStarted","Data":"c9843744e7fadf1de6d7f46097c0e662e9fd188b08711c3335c8f0de786ffcdb"} Feb 26 08:46:06 crc kubenswrapper[4741]: I0226 08:46:06.439782 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534926-9zsrt" event={"ID":"2cd25174-ea65-487c-841f-9055a74a398f","Type":"ContainerStarted","Data":"b1e1ba51f483d3e3eb5e0897946950859f8a93537a6f34d9e4bea5c122f84dba"} Feb 26 08:46:06 crc kubenswrapper[4741]: I0226 08:46:06.443127 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pcr97" event={"ID":"5c215d58-07de-43c3-b0ec-ecade20263dd","Type":"ContainerStarted","Data":"80f56828da29c9874fd504816e4b5cd8d07bf891734037266025f87c8f837c7c"} Feb 26 08:46:06 crc kubenswrapper[4741]: I0226 08:46:06.472716 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-pcr97" podStartSLOduration=2.419744222 podStartE2EDuration="8.472689273s" 
podCreationTimestamp="2026-02-26 08:45:58 +0000 UTC" firstStartedPulling="2026-02-26 08:45:59.479377248 +0000 UTC m=+1994.475314635" lastFinishedPulling="2026-02-26 08:46:05.532322309 +0000 UTC m=+2000.528259686" observedRunningTime="2026-02-26 08:46:06.469499002 +0000 UTC m=+2001.465436399" watchObservedRunningTime="2026-02-26 08:46:06.472689273 +0000 UTC m=+2001.468626660" Feb 26 08:46:06 crc kubenswrapper[4741]: I0226 08:46:06.501124 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534926-9zsrt" podStartSLOduration=3.439552368 podStartE2EDuration="6.501083202s" podCreationTimestamp="2026-02-26 08:46:00 +0000 UTC" firstStartedPulling="2026-02-26 08:46:02.47168179 +0000 UTC m=+1997.467619177" lastFinishedPulling="2026-02-26 08:46:05.533212624 +0000 UTC m=+2000.529150011" observedRunningTime="2026-02-26 08:46:06.488794722 +0000 UTC m=+2001.484732109" watchObservedRunningTime="2026-02-26 08:46:06.501083202 +0000 UTC m=+2001.497020589" Feb 26 08:46:08 crc kubenswrapper[4741]: I0226 08:46:08.994426 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.081685 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-config-data\") pod \"fdf44a23-6035-426e-b4ab-dc1bccedd505\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.081808 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-combined-ca-bundle\") pod \"fdf44a23-6035-426e-b4ab-dc1bccedd505\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.081883 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-config-data-custom\") pod \"fdf44a23-6035-426e-b4ab-dc1bccedd505\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.082049 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxpfs\" (UniqueName: \"kubernetes.io/projected/fdf44a23-6035-426e-b4ab-dc1bccedd505-kube-api-access-kxpfs\") pod \"fdf44a23-6035-426e-b4ab-dc1bccedd505\" (UID: \"fdf44a23-6035-426e-b4ab-dc1bccedd505\") " Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.089267 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fdf44a23-6035-426e-b4ab-dc1bccedd505" (UID: "fdf44a23-6035-426e-b4ab-dc1bccedd505"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.099518 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf44a23-6035-426e-b4ab-dc1bccedd505-kube-api-access-kxpfs" (OuterVolumeSpecName: "kube-api-access-kxpfs") pod "fdf44a23-6035-426e-b4ab-dc1bccedd505" (UID: "fdf44a23-6035-426e-b4ab-dc1bccedd505"). InnerVolumeSpecName "kube-api-access-kxpfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.134610 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdf44a23-6035-426e-b4ab-dc1bccedd505" (UID: "fdf44a23-6035-426e-b4ab-dc1bccedd505"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.186653 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxpfs\" (UniqueName: \"kubernetes.io/projected/fdf44a23-6035-426e-b4ab-dc1bccedd505-kube-api-access-kxpfs\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.186963 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.186977 4741 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.193223 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-config-data" (OuterVolumeSpecName: "config-data") pod "fdf44a23-6035-426e-b4ab-dc1bccedd505" (UID: "fdf44a23-6035-426e-b4ab-dc1bccedd505"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.289853 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf44a23-6035-426e-b4ab-dc1bccedd505-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.505323 4741 generic.go:334] "Generic (PLEG): container finished" podID="fdf44a23-6035-426e-b4ab-dc1bccedd505" containerID="c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648" exitCode=0 Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.505504 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-654898f896-cnwpl" event={"ID":"fdf44a23-6035-426e-b4ab-dc1bccedd505","Type":"ContainerDied","Data":"c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648"} Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.505555 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-654898f896-cnwpl" event={"ID":"fdf44a23-6035-426e-b4ab-dc1bccedd505","Type":"ContainerDied","Data":"375b941d713851effd24a32ade82beeff2ee6d1779a2c537337c3e447bcffc68"} Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.505592 4741 scope.go:117] "RemoveContainer" containerID="c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648" Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.505902 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-654898f896-cnwpl" Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.510446 4741 generic.go:334] "Generic (PLEG): container finished" podID="2cd25174-ea65-487c-841f-9055a74a398f" containerID="b1e1ba51f483d3e3eb5e0897946950859f8a93537a6f34d9e4bea5c122f84dba" exitCode=0 Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.510527 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534926-9zsrt" event={"ID":"2cd25174-ea65-487c-841f-9055a74a398f","Type":"ContainerDied","Data":"b1e1ba51f483d3e3eb5e0897946950859f8a93537a6f34d9e4bea5c122f84dba"} Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.574056 4741 scope.go:117] "RemoveContainer" containerID="c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648" Feb 26 08:46:09 crc kubenswrapper[4741]: E0226 08:46:09.578396 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648\": container with ID starting with c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648 not found: ID does not exist" containerID="c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648" Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.578493 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648"} err="failed to get container status \"c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648\": rpc error: code = NotFound desc = could not find container \"c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648\": container with ID starting with c3e04928bfe9702706dbe8a61b4231982b1b3ce317ac4058170db5c48ab5d648 not found: ID does not exist" Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.604170 4741 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/heat-engine-654898f896-cnwpl"] Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.616643 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-654898f896-cnwpl"] Feb 26 08:46:09 crc kubenswrapper[4741]: I0226 08:46:09.803247 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdf44a23-6035-426e-b4ab-dc1bccedd505" path="/var/lib/kubelet/pods/fdf44a23-6035-426e-b4ab-dc1bccedd505/volumes" Feb 26 08:46:10 crc kubenswrapper[4741]: I0226 08:46:10.542963 4741 generic.go:334] "Generic (PLEG): container finished" podID="5c215d58-07de-43c3-b0ec-ecade20263dd" containerID="80f56828da29c9874fd504816e4b5cd8d07bf891734037266025f87c8f837c7c" exitCode=0 Feb 26 08:46:10 crc kubenswrapper[4741]: I0226 08:46:10.543085 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pcr97" event={"ID":"5c215d58-07de-43c3-b0ec-ecade20263dd","Type":"ContainerDied","Data":"80f56828da29c9874fd504816e4b5cd8d07bf891734037266025f87c8f837c7c"} Feb 26 08:46:11 crc kubenswrapper[4741]: I0226 08:46:11.139283 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534926-9zsrt" Feb 26 08:46:11 crc kubenswrapper[4741]: I0226 08:46:11.168917 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5kb2\" (UniqueName: \"kubernetes.io/projected/2cd25174-ea65-487c-841f-9055a74a398f-kube-api-access-d5kb2\") pod \"2cd25174-ea65-487c-841f-9055a74a398f\" (UID: \"2cd25174-ea65-487c-841f-9055a74a398f\") " Feb 26 08:46:11 crc kubenswrapper[4741]: I0226 08:46:11.187591 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd25174-ea65-487c-841f-9055a74a398f-kube-api-access-d5kb2" (OuterVolumeSpecName: "kube-api-access-d5kb2") pod "2cd25174-ea65-487c-841f-9055a74a398f" (UID: "2cd25174-ea65-487c-841f-9055a74a398f"). 
InnerVolumeSpecName "kube-api-access-d5kb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:46:11 crc kubenswrapper[4741]: I0226 08:46:11.273176 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5kb2\" (UniqueName: \"kubernetes.io/projected/2cd25174-ea65-487c-841f-9055a74a398f-kube-api-access-d5kb2\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:11 crc kubenswrapper[4741]: I0226 08:46:11.571012 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534926-9zsrt" Feb 26 08:46:11 crc kubenswrapper[4741]: I0226 08:46:11.571045 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534926-9zsrt" event={"ID":"2cd25174-ea65-487c-841f-9055a74a398f","Type":"ContainerDied","Data":"c9843744e7fadf1de6d7f46097c0e662e9fd188b08711c3335c8f0de786ffcdb"} Feb 26 08:46:11 crc kubenswrapper[4741]: I0226 08:46:11.571499 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9843744e7fadf1de6d7f46097c0e662e9fd188b08711c3335c8f0de786ffcdb" Feb 26 08:46:11 crc kubenswrapper[4741]: I0226 08:46:11.607394 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:46:11 crc kubenswrapper[4741]: I0226 08:46:11.627420 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534920-nwmvg"] Feb 26 08:46:11 crc kubenswrapper[4741]: I0226 08:46:11.648351 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534920-nwmvg"] Feb 26 08:46:11 crc kubenswrapper[4741]: I0226 08:46:11.812587 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc7465b7-f180-4fe8-9c29-3e75da8c867c" path="/var/lib/kubelet/pods/bc7465b7-f180-4fe8-9c29-3e75da8c867c/volumes" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.050438 4741 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/aodh-db-sync-pcr97" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.065211 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="acd31381-59b4-426e-94f1-57ac13548b26" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.25:5671: connect: connection refused" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.121257 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qchtz\" (UniqueName: \"kubernetes.io/projected/5c215d58-07de-43c3-b0ec-ecade20263dd-kube-api-access-qchtz\") pod \"5c215d58-07de-43c3-b0ec-ecade20263dd\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.121567 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-config-data\") pod \"5c215d58-07de-43c3-b0ec-ecade20263dd\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.121630 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-scripts\") pod \"5c215d58-07de-43c3-b0ec-ecade20263dd\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.121836 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-combined-ca-bundle\") pod \"5c215d58-07de-43c3-b0ec-ecade20263dd\" (UID: \"5c215d58-07de-43c3-b0ec-ecade20263dd\") " Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.150975 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-scripts" (OuterVolumeSpecName: "scripts") pod "5c215d58-07de-43c3-b0ec-ecade20263dd" (UID: "5c215d58-07de-43c3-b0ec-ecade20263dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.151384 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c215d58-07de-43c3-b0ec-ecade20263dd-kube-api-access-qchtz" (OuterVolumeSpecName: "kube-api-access-qchtz") pod "5c215d58-07de-43c3-b0ec-ecade20263dd" (UID: "5c215d58-07de-43c3-b0ec-ecade20263dd"). InnerVolumeSpecName "kube-api-access-qchtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.162389 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-config-data" (OuterVolumeSpecName: "config-data") pod "5c215d58-07de-43c3-b0ec-ecade20263dd" (UID: "5c215d58-07de-43c3-b0ec-ecade20263dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.165810 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c215d58-07de-43c3-b0ec-ecade20263dd" (UID: "5c215d58-07de-43c3-b0ec-ecade20263dd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.196273 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="9f58f56d-176d-4468-ae5a-31e1e7fb48a1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.26:5671: connect: connection refused" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.225196 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qchtz\" (UniqueName: \"kubernetes.io/projected/5c215d58-07de-43c3-b0ec-ecade20263dd-kube-api-access-qchtz\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.225248 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.225261 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.225271 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c215d58-07de-43c3-b0ec-ecade20263dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.598840 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pcr97" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.598826 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pcr97" event={"ID":"5c215d58-07de-43c3-b0ec-ecade20263dd","Type":"ContainerDied","Data":"cfcd547c7eb9ab95c1b40005dc511976da55fea69472adba3f6072c28379de60"} Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.599468 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfcd547c7eb9ab95c1b40005dc511976da55fea69472adba3f6072c28379de60" Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.602884 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" event={"ID":"e73f4159-15a0-40ca-b09a-903cb04c34d9","Type":"ContainerStarted","Data":"490a6673ad75bae2595734d627ab6e582b2e832cc94a5a80f4ab48ebcd6b5ef4"} Feb 26 08:46:12 crc kubenswrapper[4741]: I0226 08:46:12.671920 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" podStartSLOduration=4.778977163 podStartE2EDuration="34.67188024s" podCreationTimestamp="2026-02-26 08:45:38 +0000 UTC" firstStartedPulling="2026-02-26 08:45:41.707085921 +0000 UTC m=+1976.703023308" lastFinishedPulling="2026-02-26 08:46:11.599989008 +0000 UTC m=+2006.595926385" observedRunningTime="2026-02-26 08:46:12.638459758 +0000 UTC m=+2007.634397155" watchObservedRunningTime="2026-02-26 08:46:12.67188024 +0000 UTC m=+2007.667817637" Feb 26 08:46:13 crc kubenswrapper[4741]: I0226 08:46:13.571093 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 26 08:46:13 crc kubenswrapper[4741]: I0226 08:46:13.571596 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-api" 
containerID="cri-o://7b0c28d8d39f68aba771dd5a7d6b388f01ca3ef5a50f41505ee6eb8cbe5ac7c2" gracePeriod=30 Feb 26 08:46:13 crc kubenswrapper[4741]: I0226 08:46:13.571687 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-listener" containerID="cri-o://9ef9754f8fb9ab0ed9b71c4d6749f2170819212727f6ac2b74bd6bff3ad762d9" gracePeriod=30 Feb 26 08:46:13 crc kubenswrapper[4741]: I0226 08:46:13.571787 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-notifier" containerID="cri-o://ab3e9421b95f828cea02cbc3a86cfc8f56c3c459419250faa3976bb735d1eb40" gracePeriod=30 Feb 26 08:46:13 crc kubenswrapper[4741]: I0226 08:46:13.571847 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-evaluator" containerID="cri-o://06fa3276f76ae9b60f7ef38fe32d88742b02f4f1db93e7d2b583718e4c690179" gracePeriod=30 Feb 26 08:46:14 crc kubenswrapper[4741]: I0226 08:46:14.636027 4741 generic.go:334] "Generic (PLEG): container finished" podID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerID="06fa3276f76ae9b60f7ef38fe32d88742b02f4f1db93e7d2b583718e4c690179" exitCode=0 Feb 26 08:46:14 crc kubenswrapper[4741]: I0226 08:46:14.636362 4741 generic.go:334] "Generic (PLEG): container finished" podID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerID="7b0c28d8d39f68aba771dd5a7d6b388f01ca3ef5a50f41505ee6eb8cbe5ac7c2" exitCode=0 Feb 26 08:46:14 crc kubenswrapper[4741]: I0226 08:46:14.636102 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bc9d7088-dbf9-41ac-8e6d-2531330f8934","Type":"ContainerDied","Data":"06fa3276f76ae9b60f7ef38fe32d88742b02f4f1db93e7d2b583718e4c690179"} Feb 26 08:46:14 crc kubenswrapper[4741]: I0226 08:46:14.636406 4741 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bc9d7088-dbf9-41ac-8e6d-2531330f8934","Type":"ContainerDied","Data":"7b0c28d8d39f68aba771dd5a7d6b388f01ca3ef5a50f41505ee6eb8cbe5ac7c2"} Feb 26 08:46:22 crc kubenswrapper[4741]: I0226 08:46:22.063333 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 26 08:46:22 crc kubenswrapper[4741]: I0226 08:46:22.194434 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 26 08:46:22 crc kubenswrapper[4741]: I0226 08:46:22.273287 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 08:46:22 crc kubenswrapper[4741]: I0226 08:46:22.736787 4741 generic.go:334] "Generic (PLEG): container finished" podID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerID="ab3e9421b95f828cea02cbc3a86cfc8f56c3c459419250faa3976bb735d1eb40" exitCode=0 Feb 26 08:46:22 crc kubenswrapper[4741]: I0226 08:46:22.736840 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bc9d7088-dbf9-41ac-8e6d-2531330f8934","Type":"ContainerDied","Data":"ab3e9421b95f828cea02cbc3a86cfc8f56c3c459419250faa3976bb735d1eb40"} Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.524632 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.723378 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-combined-ca-bundle\") pod \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.724331 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzkxn\" (UniqueName: \"kubernetes.io/projected/bc9d7088-dbf9-41ac-8e6d-2531330f8934-kube-api-access-nzkxn\") pod \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.724371 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-config-data\") pod \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.724465 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-scripts\") pod \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.724609 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-public-tls-certs\") pod \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.724706 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-internal-tls-certs\") pod \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\" (UID: \"bc9d7088-dbf9-41ac-8e6d-2531330f8934\") " Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.730900 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-scripts" (OuterVolumeSpecName: "scripts") pod "bc9d7088-dbf9-41ac-8e6d-2531330f8934" (UID: "bc9d7088-dbf9-41ac-8e6d-2531330f8934"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.732748 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9d7088-dbf9-41ac-8e6d-2531330f8934-kube-api-access-nzkxn" (OuterVolumeSpecName: "kube-api-access-nzkxn") pod "bc9d7088-dbf9-41ac-8e6d-2531330f8934" (UID: "bc9d7088-dbf9-41ac-8e6d-2531330f8934"). InnerVolumeSpecName "kube-api-access-nzkxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.754415 4741 generic.go:334] "Generic (PLEG): container finished" podID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerID="9ef9754f8fb9ab0ed9b71c4d6749f2170819212727f6ac2b74bd6bff3ad762d9" exitCode=0 Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.754480 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bc9d7088-dbf9-41ac-8e6d-2531330f8934","Type":"ContainerDied","Data":"9ef9754f8fb9ab0ed9b71c4d6749f2170819212727f6ac2b74bd6bff3ad762d9"} Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.754550 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bc9d7088-dbf9-41ac-8e6d-2531330f8934","Type":"ContainerDied","Data":"a2b4b1f4fe449a16d32a1ba089453f1850a1ae81a88520c2d433bd29ef2fca96"} Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.754582 4741 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/aodh-0" Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.754594 4741 scope.go:117] "RemoveContainer" containerID="9ef9754f8fb9ab0ed9b71c4d6749f2170819212727f6ac2b74bd6bff3ad762d9" Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.816609 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bc9d7088-dbf9-41ac-8e6d-2531330f8934" (UID: "bc9d7088-dbf9-41ac-8e6d-2531330f8934"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.829061 4741 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.829099 4741 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.829124 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzkxn\" (UniqueName: \"kubernetes.io/projected/bc9d7088-dbf9-41ac-8e6d-2531330f8934-kube-api-access-nzkxn\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.898720 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bc9d7088-dbf9-41ac-8e6d-2531330f8934" (UID: "bc9d7088-dbf9-41ac-8e6d-2531330f8934"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.930757 4741 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:23 crc kubenswrapper[4741]: I0226 08:46:23.969228 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-config-data" (OuterVolumeSpecName: "config-data") pod "bc9d7088-dbf9-41ac-8e6d-2531330f8934" (UID: "bc9d7088-dbf9-41ac-8e6d-2531330f8934"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.033231 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.049864 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc9d7088-dbf9-41ac-8e6d-2531330f8934" (UID: "bc9d7088-dbf9-41ac-8e6d-2531330f8934"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:24 crc kubenswrapper[4741]: E0226 08:46:24.063943 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode73f4159_15a0_40ca_b09a_903cb04c34d9.slice/crio-490a6673ad75bae2595734d627ab6e582b2e832cc94a5a80f4ab48ebcd6b5ef4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode73f4159_15a0_40ca_b09a_903cb04c34d9.slice/crio-conmon-490a6673ad75bae2595734d627ab6e582b2e832cc94a5a80f4ab48ebcd6b5ef4.scope\": RecentStats: unable to find data in memory cache]" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.102828 4741 scope.go:117] "RemoveContainer" containerID="ab3e9421b95f828cea02cbc3a86cfc8f56c3c459419250faa3976bb735d1eb40" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.107341 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.137747 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9d7088-dbf9-41ac-8e6d-2531330f8934-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.138339 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.142314 4741 scope.go:117] "RemoveContainer" containerID="06fa3276f76ae9b60f7ef38fe32d88742b02f4f1db93e7d2b583718e4c690179" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.154155 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 26 08:46:24 crc kubenswrapper[4741]: E0226 08:46:24.154903 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-evaluator" Feb 26 08:46:24 crc 
kubenswrapper[4741]: I0226 08:46:24.154923 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-evaluator" Feb 26 08:46:24 crc kubenswrapper[4741]: E0226 08:46:24.154948 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-notifier" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.154954 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-notifier" Feb 26 08:46:24 crc kubenswrapper[4741]: E0226 08:46:24.154964 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf44a23-6035-426e-b4ab-dc1bccedd505" containerName="heat-engine" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.154971 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf44a23-6035-426e-b4ab-dc1bccedd505" containerName="heat-engine" Feb 26 08:46:24 crc kubenswrapper[4741]: E0226 08:46:24.154986 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c215d58-07de-43c3-b0ec-ecade20263dd" containerName="aodh-db-sync" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.154993 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c215d58-07de-43c3-b0ec-ecade20263dd" containerName="aodh-db-sync" Feb 26 08:46:24 crc kubenswrapper[4741]: E0226 08:46:24.155003 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-listener" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.155010 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-listener" Feb 26 08:46:24 crc kubenswrapper[4741]: E0226 08:46:24.155029 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-api" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.155035 
4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-api" Feb 26 08:46:24 crc kubenswrapper[4741]: E0226 08:46:24.155079 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd25174-ea65-487c-841f-9055a74a398f" containerName="oc" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.155086 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd25174-ea65-487c-841f-9055a74a398f" containerName="oc" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.155382 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf44a23-6035-426e-b4ab-dc1bccedd505" containerName="heat-engine" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.155406 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c215d58-07de-43c3-b0ec-ecade20263dd" containerName="aodh-db-sync" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.155420 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-api" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.155431 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-evaluator" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.155447 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-listener" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.155460 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" containerName="aodh-notifier" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.155481 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd25174-ea65-487c-841f-9055a74a398f" containerName="oc" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.158481 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.166870 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.181226 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.181226 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.181297 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tlszt" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.181347 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.181410 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.227130 4741 scope.go:117] "RemoveContainer" containerID="7b0c28d8d39f68aba771dd5a7d6b388f01ca3ef5a50f41505ee6eb8cbe5ac7c2" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.247911 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-scripts\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.248027 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-config-data\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.248191 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.248287 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-public-tls-certs\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.248528 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-internal-tls-certs\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.248556 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x7mq\" (UniqueName: \"kubernetes.io/projected/a889afc5-6db0-4421-a04c-4ea08557d068-kube-api-access-8x7mq\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.262772 4741 scope.go:117] "RemoveContainer" containerID="9ef9754f8fb9ab0ed9b71c4d6749f2170819212727f6ac2b74bd6bff3ad762d9" Feb 26 08:46:24 crc kubenswrapper[4741]: E0226 08:46:24.263318 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef9754f8fb9ab0ed9b71c4d6749f2170819212727f6ac2b74bd6bff3ad762d9\": container with ID starting with 9ef9754f8fb9ab0ed9b71c4d6749f2170819212727f6ac2b74bd6bff3ad762d9 not found: ID does not exist" 
containerID="9ef9754f8fb9ab0ed9b71c4d6749f2170819212727f6ac2b74bd6bff3ad762d9" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.263363 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef9754f8fb9ab0ed9b71c4d6749f2170819212727f6ac2b74bd6bff3ad762d9"} err="failed to get container status \"9ef9754f8fb9ab0ed9b71c4d6749f2170819212727f6ac2b74bd6bff3ad762d9\": rpc error: code = NotFound desc = could not find container \"9ef9754f8fb9ab0ed9b71c4d6749f2170819212727f6ac2b74bd6bff3ad762d9\": container with ID starting with 9ef9754f8fb9ab0ed9b71c4d6749f2170819212727f6ac2b74bd6bff3ad762d9 not found: ID does not exist" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.263391 4741 scope.go:117] "RemoveContainer" containerID="ab3e9421b95f828cea02cbc3a86cfc8f56c3c459419250faa3976bb735d1eb40" Feb 26 08:46:24 crc kubenswrapper[4741]: E0226 08:46:24.264134 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab3e9421b95f828cea02cbc3a86cfc8f56c3c459419250faa3976bb735d1eb40\": container with ID starting with ab3e9421b95f828cea02cbc3a86cfc8f56c3c459419250faa3976bb735d1eb40 not found: ID does not exist" containerID="ab3e9421b95f828cea02cbc3a86cfc8f56c3c459419250faa3976bb735d1eb40" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.264163 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3e9421b95f828cea02cbc3a86cfc8f56c3c459419250faa3976bb735d1eb40"} err="failed to get container status \"ab3e9421b95f828cea02cbc3a86cfc8f56c3c459419250faa3976bb735d1eb40\": rpc error: code = NotFound desc = could not find container \"ab3e9421b95f828cea02cbc3a86cfc8f56c3c459419250faa3976bb735d1eb40\": container with ID starting with ab3e9421b95f828cea02cbc3a86cfc8f56c3c459419250faa3976bb735d1eb40 not found: ID does not exist" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.264183 4741 scope.go:117] 
"RemoveContainer" containerID="06fa3276f76ae9b60f7ef38fe32d88742b02f4f1db93e7d2b583718e4c690179" Feb 26 08:46:24 crc kubenswrapper[4741]: E0226 08:46:24.264463 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06fa3276f76ae9b60f7ef38fe32d88742b02f4f1db93e7d2b583718e4c690179\": container with ID starting with 06fa3276f76ae9b60f7ef38fe32d88742b02f4f1db93e7d2b583718e4c690179 not found: ID does not exist" containerID="06fa3276f76ae9b60f7ef38fe32d88742b02f4f1db93e7d2b583718e4c690179" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.264494 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06fa3276f76ae9b60f7ef38fe32d88742b02f4f1db93e7d2b583718e4c690179"} err="failed to get container status \"06fa3276f76ae9b60f7ef38fe32d88742b02f4f1db93e7d2b583718e4c690179\": rpc error: code = NotFound desc = could not find container \"06fa3276f76ae9b60f7ef38fe32d88742b02f4f1db93e7d2b583718e4c690179\": container with ID starting with 06fa3276f76ae9b60f7ef38fe32d88742b02f4f1db93e7d2b583718e4c690179 not found: ID does not exist" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.264515 4741 scope.go:117] "RemoveContainer" containerID="7b0c28d8d39f68aba771dd5a7d6b388f01ca3ef5a50f41505ee6eb8cbe5ac7c2" Feb 26 08:46:24 crc kubenswrapper[4741]: E0226 08:46:24.264763 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b0c28d8d39f68aba771dd5a7d6b388f01ca3ef5a50f41505ee6eb8cbe5ac7c2\": container with ID starting with 7b0c28d8d39f68aba771dd5a7d6b388f01ca3ef5a50f41505ee6eb8cbe5ac7c2 not found: ID does not exist" containerID="7b0c28d8d39f68aba771dd5a7d6b388f01ca3ef5a50f41505ee6eb8cbe5ac7c2" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.264790 4741 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7b0c28d8d39f68aba771dd5a7d6b388f01ca3ef5a50f41505ee6eb8cbe5ac7c2"} err="failed to get container status \"7b0c28d8d39f68aba771dd5a7d6b388f01ca3ef5a50f41505ee6eb8cbe5ac7c2\": rpc error: code = NotFound desc = could not find container \"7b0c28d8d39f68aba771dd5a7d6b388f01ca3ef5a50f41505ee6eb8cbe5ac7c2\": container with ID starting with 7b0c28d8d39f68aba771dd5a7d6b388f01ca3ef5a50f41505ee6eb8cbe5ac7c2 not found: ID does not exist" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.352897 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-scripts\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.352980 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-config-data\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.353038 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.353093 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-public-tls-certs\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.353277 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-internal-tls-certs\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.353316 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x7mq\" (UniqueName: \"kubernetes.io/projected/a889afc5-6db0-4421-a04c-4ea08557d068-kube-api-access-8x7mq\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.357888 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-scripts\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.358094 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.358676 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-public-tls-certs\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.359100 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-internal-tls-certs\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.360238 4741 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a889afc5-6db0-4421-a04c-4ea08557d068-config-data\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.375553 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x7mq\" (UniqueName: \"kubernetes.io/projected/a889afc5-6db0-4421-a04c-4ea08557d068-kube-api-access-8x7mq\") pod \"aodh-0\" (UID: \"a889afc5-6db0-4421-a04c-4ea08557d068\") " pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.510835 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.785309 4741 generic.go:334] "Generic (PLEG): container finished" podID="e73f4159-15a0-40ca-b09a-903cb04c34d9" containerID="490a6673ad75bae2595734d627ab6e582b2e832cc94a5a80f4ab48ebcd6b5ef4" exitCode=0 Feb 26 08:46:24 crc kubenswrapper[4741]: I0226 08:46:24.785615 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" event={"ID":"e73f4159-15a0-40ca-b09a-903cb04c34d9","Type":"ContainerDied","Data":"490a6673ad75bae2595734d627ab6e582b2e832cc94a5a80f4ab48ebcd6b5ef4"} Feb 26 08:46:25 crc kubenswrapper[4741]: I0226 08:46:25.115675 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 26 08:46:25 crc kubenswrapper[4741]: W0226 08:46:25.134513 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda889afc5_6db0_4421_a04c_4ea08557d068.slice/crio-283ee3b89244bc4657b2c5c68fef7224dedcbec72cffac5ffa6e8d835a07c73b WatchSource:0}: Error finding container 283ee3b89244bc4657b2c5c68fef7224dedcbec72cffac5ffa6e8d835a07c73b: Status 404 returned error can't find the container with id 
283ee3b89244bc4657b2c5c68fef7224dedcbec72cffac5ffa6e8d835a07c73b Feb 26 08:46:25 crc kubenswrapper[4741]: I0226 08:46:25.807571 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9d7088-dbf9-41ac-8e6d-2531330f8934" path="/var/lib/kubelet/pods/bc9d7088-dbf9-41ac-8e6d-2531330f8934/volumes" Feb 26 08:46:25 crc kubenswrapper[4741]: I0226 08:46:25.830415 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a889afc5-6db0-4421-a04c-4ea08557d068","Type":"ContainerStarted","Data":"558e1ddacd3a7a23fdb9f3431f15b87aba3dbf4e1a3711649154a123dd486191"} Feb 26 08:46:25 crc kubenswrapper[4741]: I0226 08:46:25.830491 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a889afc5-6db0-4421-a04c-4ea08557d068","Type":"ContainerStarted","Data":"283ee3b89244bc4657b2c5c68fef7224dedcbec72cffac5ffa6e8d835a07c73b"} Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.521233 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.639225 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-inventory\") pod \"e73f4159-15a0-40ca-b09a-903cb04c34d9\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.639710 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-ssh-key-openstack-edpm-ipam\") pod \"e73f4159-15a0-40ca-b09a-903cb04c34d9\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.639743 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q76gt\" (UniqueName: \"kubernetes.io/projected/e73f4159-15a0-40ca-b09a-903cb04c34d9-kube-api-access-q76gt\") pod \"e73f4159-15a0-40ca-b09a-903cb04c34d9\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.639815 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-repo-setup-combined-ca-bundle\") pod \"e73f4159-15a0-40ca-b09a-903cb04c34d9\" (UID: \"e73f4159-15a0-40ca-b09a-903cb04c34d9\") " Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.645865 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e73f4159-15a0-40ca-b09a-903cb04c34d9-kube-api-access-q76gt" (OuterVolumeSpecName: "kube-api-access-q76gt") pod "e73f4159-15a0-40ca-b09a-903cb04c34d9" (UID: "e73f4159-15a0-40ca-b09a-903cb04c34d9"). InnerVolumeSpecName "kube-api-access-q76gt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.655283 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e73f4159-15a0-40ca-b09a-903cb04c34d9" (UID: "e73f4159-15a0-40ca-b09a-903cb04c34d9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.684058 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-inventory" (OuterVolumeSpecName: "inventory") pod "e73f4159-15a0-40ca-b09a-903cb04c34d9" (UID: "e73f4159-15a0-40ca-b09a-903cb04c34d9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.687860 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e73f4159-15a0-40ca-b09a-903cb04c34d9" (UID: "e73f4159-15a0-40ca-b09a-903cb04c34d9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.743133 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.743179 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q76gt\" (UniqueName: \"kubernetes.io/projected/e73f4159-15a0-40ca-b09a-903cb04c34d9-kube-api-access-q76gt\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.743195 4741 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.743207 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e73f4159-15a0-40ca-b09a-903cb04c34d9-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.851082 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" event={"ID":"e73f4159-15a0-40ca-b09a-903cb04c34d9","Type":"ContainerDied","Data":"e56d5bda46db220bdb2dc6701c28be5750a21dd1600641608b1715e14cc338ca"} Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.851159 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e56d5bda46db220bdb2dc6701c28be5750a21dd1600641608b1715e14cc338ca" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.851235 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.959301 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s"] Feb 26 08:46:26 crc kubenswrapper[4741]: E0226 08:46:26.960347 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73f4159-15a0-40ca-b09a-903cb04c34d9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.960369 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73f4159-15a0-40ca-b09a-903cb04c34d9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.960662 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="e73f4159-15a0-40ca-b09a-903cb04c34d9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.961743 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.965826 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.966219 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.966469 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.966734 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:46:26 crc kubenswrapper[4741]: I0226 08:46:26.981453 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s"] Feb 26 08:46:27 crc kubenswrapper[4741]: I0226 08:46:27.053941 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jl69s\" (UID: \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" Feb 26 08:46:27 crc kubenswrapper[4741]: I0226 08:46:27.054156 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jl69s\" (UID: \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" Feb 26 08:46:27 crc kubenswrapper[4741]: I0226 08:46:27.054306 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4hmj\" (UniqueName: \"kubernetes.io/projected/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-kube-api-access-x4hmj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jl69s\" (UID: \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" Feb 26 08:46:27 crc kubenswrapper[4741]: I0226 08:46:27.158001 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4hmj\" (UniqueName: \"kubernetes.io/projected/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-kube-api-access-x4hmj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jl69s\" (UID: \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" Feb 26 08:46:27 crc kubenswrapper[4741]: I0226 08:46:27.158135 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jl69s\" (UID: \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" Feb 26 08:46:27 crc kubenswrapper[4741]: I0226 08:46:27.158292 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jl69s\" (UID: \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" Feb 26 08:46:27 crc kubenswrapper[4741]: I0226 08:46:27.164859 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-jl69s\" (UID: \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" Feb 26 08:46:27 crc kubenswrapper[4741]: I0226 08:46:27.177314 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jl69s\" (UID: \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" Feb 26 08:46:27 crc kubenswrapper[4741]: I0226 08:46:27.179372 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4hmj\" (UniqueName: \"kubernetes.io/projected/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-kube-api-access-x4hmj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jl69s\" (UID: \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" Feb 26 08:46:27 crc kubenswrapper[4741]: I0226 08:46:27.300845 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" Feb 26 08:46:27 crc kubenswrapper[4741]: I0226 08:46:27.776497 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.137:5671: connect: connection refused" Feb 26 08:46:27 crc kubenswrapper[4741]: I0226 08:46:27.900354 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a889afc5-6db0-4421-a04c-4ea08557d068","Type":"ContainerStarted","Data":"1c81c5564a5bd201f66743ce7483bf3b48e05ea2e322283287d8243832b1322f"} Feb 26 08:46:27 crc kubenswrapper[4741]: I0226 08:46:27.995687 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" containerName="rabbitmq" containerID="cri-o://5845e157fcbb2c36269cfc9d78efb2946b8754682d3f6c5e9cee686f53cd9f11" gracePeriod=604795 Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.231038 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s"] Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.552473 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n24wz"] Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.555756 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.608889 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n24wz"] Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.613174 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpwmw\" (UniqueName: \"kubernetes.io/projected/0037aba1-80ac-48e4-baf7-c047e44c4e22-kube-api-access-mpwmw\") pod \"community-operators-n24wz\" (UID: \"0037aba1-80ac-48e4-baf7-c047e44c4e22\") " pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.613924 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0037aba1-80ac-48e4-baf7-c047e44c4e22-catalog-content\") pod \"community-operators-n24wz\" (UID: \"0037aba1-80ac-48e4-baf7-c047e44c4e22\") " pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.613970 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0037aba1-80ac-48e4-baf7-c047e44c4e22-utilities\") pod \"community-operators-n24wz\" (UID: \"0037aba1-80ac-48e4-baf7-c047e44c4e22\") " pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.717546 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0037aba1-80ac-48e4-baf7-c047e44c4e22-catalog-content\") pod \"community-operators-n24wz\" (UID: \"0037aba1-80ac-48e4-baf7-c047e44c4e22\") " pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.717606 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0037aba1-80ac-48e4-baf7-c047e44c4e22-utilities\") pod \"community-operators-n24wz\" (UID: \"0037aba1-80ac-48e4-baf7-c047e44c4e22\") " pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.717739 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpwmw\" (UniqueName: \"kubernetes.io/projected/0037aba1-80ac-48e4-baf7-c047e44c4e22-kube-api-access-mpwmw\") pod \"community-operators-n24wz\" (UID: \"0037aba1-80ac-48e4-baf7-c047e44c4e22\") " pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.718507 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0037aba1-80ac-48e4-baf7-c047e44c4e22-catalog-content\") pod \"community-operators-n24wz\" (UID: \"0037aba1-80ac-48e4-baf7-c047e44c4e22\") " pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.718611 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0037aba1-80ac-48e4-baf7-c047e44c4e22-utilities\") pod \"community-operators-n24wz\" (UID: \"0037aba1-80ac-48e4-baf7-c047e44c4e22\") " pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.743693 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpwmw\" (UniqueName: \"kubernetes.io/projected/0037aba1-80ac-48e4-baf7-c047e44c4e22-kube-api-access-mpwmw\") pod \"community-operators-n24wz\" (UID: \"0037aba1-80ac-48e4-baf7-c047e44c4e22\") " pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.889944 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:28 crc kubenswrapper[4741]: I0226 08:46:28.922282 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" event={"ID":"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9","Type":"ContainerStarted","Data":"82a4caf83416f5dcede6777a075e4f894431ed5f9c9911fcd3582ee06eb950e7"} Feb 26 08:46:29 crc kubenswrapper[4741]: I0226 08:46:29.658535 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n24wz"] Feb 26 08:46:29 crc kubenswrapper[4741]: W0226 08:46:29.675662 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0037aba1_80ac_48e4_baf7_c047e44c4e22.slice/crio-0be8cab21f1dff455ab1ee5e6d13039de5f6ec6d34111a48ec8190ce6595db14 WatchSource:0}: Error finding container 0be8cab21f1dff455ab1ee5e6d13039de5f6ec6d34111a48ec8190ce6595db14: Status 404 returned error can't find the container with id 0be8cab21f1dff455ab1ee5e6d13039de5f6ec6d34111a48ec8190ce6595db14 Feb 26 08:46:29 crc kubenswrapper[4741]: I0226 08:46:29.942599 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n24wz" event={"ID":"0037aba1-80ac-48e4-baf7-c047e44c4e22","Type":"ContainerStarted","Data":"0be8cab21f1dff455ab1ee5e6d13039de5f6ec6d34111a48ec8190ce6595db14"} Feb 26 08:46:29 crc kubenswrapper[4741]: I0226 08:46:29.946998 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" event={"ID":"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9","Type":"ContainerStarted","Data":"e5bfdd0c1733e96807b06b71f152eba18c1e3091787a180ef8048f831fe934db"} Feb 26 08:46:29 crc kubenswrapper[4741]: I0226 08:46:29.953965 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"a889afc5-6db0-4421-a04c-4ea08557d068","Type":"ContainerStarted","Data":"ee6ffc06347e37b5eb3bcafc928f7a8d154fc75261b449fd6a0fd4f9b881e4a3"} Feb 26 08:46:29 crc kubenswrapper[4741]: I0226 08:46:29.992873 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" podStartSLOduration=2.834553113 podStartE2EDuration="3.992845567s" podCreationTimestamp="2026-02-26 08:46:26 +0000 UTC" firstStartedPulling="2026-02-26 08:46:28.241777052 +0000 UTC m=+2023.237714439" lastFinishedPulling="2026-02-26 08:46:29.400069506 +0000 UTC m=+2024.396006893" observedRunningTime="2026-02-26 08:46:29.978097317 +0000 UTC m=+2024.974034714" watchObservedRunningTime="2026-02-26 08:46:29.992845567 +0000 UTC m=+2024.988782954" Feb 26 08:46:30 crc kubenswrapper[4741]: I0226 08:46:30.969752 4741 generic.go:334] "Generic (PLEG): container finished" podID="0037aba1-80ac-48e4-baf7-c047e44c4e22" containerID="19a665b0f2ebd974cab93a243a29d57c18d56b77771a665471db3b30c5ee29a4" exitCode=0 Feb 26 08:46:30 crc kubenswrapper[4741]: I0226 08:46:30.969951 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n24wz" event={"ID":"0037aba1-80ac-48e4-baf7-c047e44c4e22","Type":"ContainerDied","Data":"19a665b0f2ebd974cab93a243a29d57c18d56b77771a665471db3b30c5ee29a4"} Feb 26 08:46:33 crc kubenswrapper[4741]: I0226 08:46:33.004114 4741 generic.go:334] "Generic (PLEG): container finished" podID="c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9" containerID="e5bfdd0c1733e96807b06b71f152eba18c1e3091787a180ef8048f831fe934db" exitCode=0 Feb 26 08:46:33 crc kubenswrapper[4741]: I0226 08:46:33.004870 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" event={"ID":"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9","Type":"ContainerDied","Data":"e5bfdd0c1733e96807b06b71f152eba18c1e3091787a180ef8048f831fe934db"} Feb 26 08:46:34 crc 
kubenswrapper[4741]: I0226 08:46:34.023379 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n24wz" event={"ID":"0037aba1-80ac-48e4-baf7-c047e44c4e22","Type":"ContainerStarted","Data":"10bfa7954c508a4026e647c5c79e03b8addf4010b93cdc4c040492edf6675cd0"} Feb 26 08:46:34 crc kubenswrapper[4741]: I0226 08:46:34.027820 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a889afc5-6db0-4421-a04c-4ea08557d068","Type":"ContainerStarted","Data":"26bf7a9b278b2932e524dc58f37cbb60129897063fcf53ec936e2d99e5902880"} Feb 26 08:46:34 crc kubenswrapper[4741]: I0226 08:46:34.092621 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.716108061 podStartE2EDuration="10.092592053s" podCreationTimestamp="2026-02-26 08:46:24 +0000 UTC" firstStartedPulling="2026-02-26 08:46:25.140820705 +0000 UTC m=+2020.136758092" lastFinishedPulling="2026-02-26 08:46:32.517304677 +0000 UTC m=+2027.513242084" observedRunningTime="2026-02-26 08:46:34.071513033 +0000 UTC m=+2029.067450430" watchObservedRunningTime="2026-02-26 08:46:34.092592053 +0000 UTC m=+2029.088529440" Feb 26 08:46:34 crc kubenswrapper[4741]: I0226 08:46:34.715461 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" Feb 26 08:46:34 crc kubenswrapper[4741]: I0226 08:46:34.840343 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4hmj\" (UniqueName: \"kubernetes.io/projected/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-kube-api-access-x4hmj\") pod \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\" (UID: \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\") " Feb 26 08:46:34 crc kubenswrapper[4741]: I0226 08:46:34.840914 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-inventory\") pod \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\" (UID: \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\") " Feb 26 08:46:34 crc kubenswrapper[4741]: I0226 08:46:34.841041 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-ssh-key-openstack-edpm-ipam\") pod \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\" (UID: \"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9\") " Feb 26 08:46:34 crc kubenswrapper[4741]: I0226 08:46:34.855551 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-kube-api-access-x4hmj" (OuterVolumeSpecName: "kube-api-access-x4hmj") pod "c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9" (UID: "c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9"). InnerVolumeSpecName "kube-api-access-x4hmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:46:34 crc kubenswrapper[4741]: I0226 08:46:34.891224 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9" (UID: "c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:34 crc kubenswrapper[4741]: I0226 08:46:34.891325 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-inventory" (OuterVolumeSpecName: "inventory") pod "c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9" (UID: "c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:34 crc kubenswrapper[4741]: I0226 08:46:34.944967 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4hmj\" (UniqueName: \"kubernetes.io/projected/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-kube-api-access-x4hmj\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:34 crc kubenswrapper[4741]: I0226 08:46:34.945014 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:34 crc kubenswrapper[4741]: I0226 08:46:34.945032 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.045935 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" 
event={"ID":"c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9","Type":"ContainerDied","Data":"82a4caf83416f5dcede6777a075e4f894431ed5f9c9911fcd3582ee06eb950e7"} Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.047786 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82a4caf83416f5dcede6777a075e4f894431ed5f9c9911fcd3582ee06eb950e7" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.046494 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jl69s" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.051525 4741 generic.go:334] "Generic (PLEG): container finished" podID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" containerID="5845e157fcbb2c36269cfc9d78efb2946b8754682d3f6c5e9cee686f53cd9f11" exitCode=0 Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.052487 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d20c309e-9b10-446d-a7f7-8aad2bdecfc9","Type":"ContainerDied","Data":"5845e157fcbb2c36269cfc9d78efb2946b8754682d3f6c5e9cee686f53cd9f11"} Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.215292 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb"] Feb 26 08:46:35 crc kubenswrapper[4741]: E0226 08:46:35.216164 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.216189 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.216454 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" 
Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.217588 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.220059 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.220309 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.220582 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.220978 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.233913 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb"] Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.357415 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5bx5\" (UniqueName: \"kubernetes.io/projected/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-kube-api-access-g5bx5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65twb\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.357474 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65twb\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.357598 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65twb\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.357705 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65twb\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.461201 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5bx5\" (UniqueName: \"kubernetes.io/projected/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-kube-api-access-g5bx5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65twb\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.461254 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65twb\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.461365 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65twb\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.461877 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65twb\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.469508 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65twb\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.470833 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65twb\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.479955 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65twb\" (UID: 
\"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.484623 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5bx5\" (UniqueName: \"kubernetes.io/projected/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-kube-api-access-g5bx5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65twb\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.544812 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:46:35 crc kubenswrapper[4741]: I0226 08:46:35.966904 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.070038 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d20c309e-9b10-446d-a7f7-8aad2bdecfc9","Type":"ContainerDied","Data":"9c7a0e5e79e649be99fa0674cc554c94490e3e8dff9a81f5fed75f9b4979d27c"} Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.070127 4741 scope.go:117] "RemoveContainer" containerID="5845e157fcbb2c36269cfc9d78efb2946b8754682d3f6c5e9cee686f53cd9f11" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.070280 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.091477 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-config-data\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.091549 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-pod-info\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.091610 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-erlang-cookie-secret\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.091833 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-tls\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.091909 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-plugins-conf\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.091949 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-plugins\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.092066 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-confd\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.092124 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjrbt\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-kube-api-access-kjrbt\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.092164 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-server-conf\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.092195 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-erlang-cookie\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.106198 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-pod-info" (OuterVolumeSpecName: "pod-info") pod "d20c309e-9b10-446d-a7f7-8aad2bdecfc9" (UID: "d20c309e-9b10-446d-a7f7-8aad2bdecfc9"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.106769 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d20c309e-9b10-446d-a7f7-8aad2bdecfc9" (UID: "d20c309e-9b10-446d-a7f7-8aad2bdecfc9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.107199 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d20c309e-9b10-446d-a7f7-8aad2bdecfc9" (UID: "d20c309e-9b10-446d-a7f7-8aad2bdecfc9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.107614 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-kube-api-access-kjrbt" (OuterVolumeSpecName: "kube-api-access-kjrbt") pod "d20c309e-9b10-446d-a7f7-8aad2bdecfc9" (UID: "d20c309e-9b10-446d-a7f7-8aad2bdecfc9"). InnerVolumeSpecName "kube-api-access-kjrbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.109323 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d20c309e-9b10-446d-a7f7-8aad2bdecfc9" (UID: "d20c309e-9b10-446d-a7f7-8aad2bdecfc9"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.109684 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d20c309e-9b10-446d-a7f7-8aad2bdecfc9" (UID: "d20c309e-9b10-446d-a7f7-8aad2bdecfc9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.109278 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d20c309e-9b10-446d-a7f7-8aad2bdecfc9" (UID: "d20c309e-9b10-446d-a7f7-8aad2bdecfc9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.114188 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") " Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.123407 4741 scope.go:117] "RemoveContainer" containerID="10c025245d1919b81b16d3a4063d71f4e69aef927a2e25d87e38b1cb026aa792" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.126812 4741 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-pod-info\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.126841 4741 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.126877 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.126895 4741 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.126906 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.126917 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjrbt\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-kube-api-access-kjrbt\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.126927 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.149272 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-config-data" (OuterVolumeSpecName: "config-data") pod "d20c309e-9b10-446d-a7f7-8aad2bdecfc9" (UID: "d20c309e-9b10-446d-a7f7-8aad2bdecfc9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.209311 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-server-conf" (OuterVolumeSpecName: "server-conf") pod "d20c309e-9b10-446d-a7f7-8aad2bdecfc9" (UID: "d20c309e-9b10-446d-a7f7-8aad2bdecfc9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:46:36 crc kubenswrapper[4741]: E0226 08:46:36.231496 4741 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") : UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/d20c309e-9b10-446d-a7f7-8aad2bdecfc9/volumes/kubernetes.io~csi/pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/d20c309e-9b10-446d-a7f7-8aad2bdecfc9/volumes/kubernetes.io~csi/pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7/vol_data.json]: open /var/lib/kubelet/pods/d20c309e-9b10-446d-a7f7-8aad2bdecfc9/volumes/kubernetes.io~csi/pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") pod \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\" (UID: \"d20c309e-9b10-446d-a7f7-8aad2bdecfc9\") : 
kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/d20c309e-9b10-446d-a7f7-8aad2bdecfc9/volumes/kubernetes.io~csi/pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/d20c309e-9b10-446d-a7f7-8aad2bdecfc9/volumes/kubernetes.io~csi/pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7/vol_data.json]: open /var/lib/kubelet/pods/d20c309e-9b10-446d-a7f7-8aad2bdecfc9/volumes/kubernetes.io~csi/pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7/vol_data.json: no such file or directory" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.233124 4741 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.233150 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.245300 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7" (OuterVolumeSpecName: "persistence") pod "d20c309e-9b10-446d-a7f7-8aad2bdecfc9" (UID: "d20c309e-9b10-446d-a7f7-8aad2bdecfc9"). InnerVolumeSpecName "pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.284490 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb"] Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.314441 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d20c309e-9b10-446d-a7f7-8aad2bdecfc9" (UID: "d20c309e-9b10-446d-a7f7-8aad2bdecfc9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.335472 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d20c309e-9b10-446d-a7f7-8aad2bdecfc9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.335801 4741 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") on node \"crc\" " Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.481845 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.484222 4741 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.484513 4741 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7") on node "crc" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.521238 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.564001 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 08:46:36 crc kubenswrapper[4741]: E0226 08:46:36.564747 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" containerName="setup-container" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.564770 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" containerName="setup-container" Feb 26 08:46:36 crc kubenswrapper[4741]: E0226 08:46:36.564811 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" containerName="rabbitmq" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.564818 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" containerName="rabbitmq" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.565060 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" containerName="rabbitmq" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.567773 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.582329 4741 reconciler_common.go:293] "Volume detached for volume \"pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") on node \"crc\" DevicePath \"\"" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.593614 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.685481 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-server-conf\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.685603 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.685638 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.685674 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvb57\" (UniqueName: \"kubernetes.io/projected/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-kube-api-access-pvb57\") pod 
\"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.685776 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.685826 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.685861 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.685888 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.685937 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-pod-info\") pod \"rabbitmq-server-1\" 
(UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.686009 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-config-data\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.686132 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.788692 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.788775 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.788807 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc 
kubenswrapper[4741]: I0226 08:46:36.788836 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.788882 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-pod-info\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.788975 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-config-data\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.789062 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.789279 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-server-conf\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.789283 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.789326 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.790136 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-config-data\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.790172 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.790228 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.790274 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvb57\" (UniqueName: \"kubernetes.io/projected/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-kube-api-access-pvb57\") pod \"rabbitmq-server-1\" (UID: 
\"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.790832 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.792426 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-server-conf\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.793960 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.798500 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-pod-info\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.798726 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.798746 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.798952 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.798986 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/90a9f1f67587bf38d415b7fdc0210d07aee6e17315e9a5e9ad6c5d6b568aaaf6/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 26 08:46:36 crc kubenswrapper[4741]: I0226 08:46:36.829425 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvb57\" (UniqueName: \"kubernetes.io/projected/9f6d22be-0e7b-46b9-beff-4dacd2f8ee69-kube-api-access-pvb57\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:37 crc kubenswrapper[4741]: I0226 08:46:37.084216 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" event={"ID":"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e","Type":"ContainerStarted","Data":"609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045"} Feb 26 08:46:37 crc kubenswrapper[4741]: I0226 08:46:37.161306 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318fe81c-85ba-434d-892f-9acb6e8eeca7\") pod \"rabbitmq-server-1\" (UID: \"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69\") " pod="openstack/rabbitmq-server-1" Feb 26 08:46:37 crc kubenswrapper[4741]: I0226 08:46:37.220395 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 26 08:46:37 crc kubenswrapper[4741]: I0226 08:46:37.810264 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d20c309e-9b10-446d-a7f7-8aad2bdecfc9" path="/var/lib/kubelet/pods/d20c309e-9b10-446d-a7f7-8aad2bdecfc9/volumes" Feb 26 08:46:37 crc kubenswrapper[4741]: I0226 08:46:37.818021 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 08:46:37 crc kubenswrapper[4741]: W0226 08:46:37.825223 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f6d22be_0e7b_46b9_beff_4dacd2f8ee69.slice/crio-5ee701fc432506e1ee140308556c2171849d8b30f4f37a731ef2fc1ee7abd9c0 WatchSource:0}: Error finding container 5ee701fc432506e1ee140308556c2171849d8b30f4f37a731ef2fc1ee7abd9c0: Status 404 returned error can't find the container with id 5ee701fc432506e1ee140308556c2171849d8b30f4f37a731ef2fc1ee7abd9c0 Feb 26 08:46:38 crc kubenswrapper[4741]: I0226 08:46:38.100889 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69","Type":"ContainerStarted","Data":"5ee701fc432506e1ee140308556c2171849d8b30f4f37a731ef2fc1ee7abd9c0"} Feb 26 08:46:40 crc kubenswrapper[4741]: I0226 08:46:40.132426 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69","Type":"ContainerStarted","Data":"31407cfb64445ca5d512c72d72a42e1f0e0bedb5311f01116d523720a74340dd"} Feb 26 08:46:41 crc kubenswrapper[4741]: I0226 08:46:41.163539 4741 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" event={"ID":"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e","Type":"ContainerStarted","Data":"481ecbc138e3d7c252a93d6d6db21acdb29f7ddfdc30b2dbbce664600aea9b8f"} Feb 26 08:46:41 crc kubenswrapper[4741]: I0226 08:46:41.225562 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" podStartSLOduration=2.048716454 podStartE2EDuration="6.225535477s" podCreationTimestamp="2026-02-26 08:46:35 +0000 UTC" firstStartedPulling="2026-02-26 08:46:36.280966068 +0000 UTC m=+2031.276903445" lastFinishedPulling="2026-02-26 08:46:40.457785081 +0000 UTC m=+2035.453722468" observedRunningTime="2026-02-26 08:46:41.211399424 +0000 UTC m=+2036.207336821" watchObservedRunningTime="2026-02-26 08:46:41.225535477 +0000 UTC m=+2036.221472864" Feb 26 08:46:42 crc kubenswrapper[4741]: I0226 08:46:42.181136 4741 generic.go:334] "Generic (PLEG): container finished" podID="0037aba1-80ac-48e4-baf7-c047e44c4e22" containerID="10bfa7954c508a4026e647c5c79e03b8addf4010b93cdc4c040492edf6675cd0" exitCode=0 Feb 26 08:46:42 crc kubenswrapper[4741]: I0226 08:46:42.181241 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n24wz" event={"ID":"0037aba1-80ac-48e4-baf7-c047e44c4e22","Type":"ContainerDied","Data":"10bfa7954c508a4026e647c5c79e03b8addf4010b93cdc4c040492edf6675cd0"} Feb 26 08:46:43 crc kubenswrapper[4741]: I0226 08:46:43.896976 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-fzxmn" podUID="4ddcb17f-6b4a-4194-aab9-e24dc49c75e0" containerName="registry-server" probeResult="failure" output=< Feb 26 08:46:43 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:46:43 crc kubenswrapper[4741]: > Feb 26 08:46:43 crc kubenswrapper[4741]: I0226 08:46:43.908765 4741 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-fzxmn" podUID="4ddcb17f-6b4a-4194-aab9-e24dc49c75e0" containerName="registry-server" probeResult="failure" output=< Feb 26 08:46:43 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:46:43 crc kubenswrapper[4741]: > Feb 26 08:46:44 crc kubenswrapper[4741]: I0226 08:46:44.210419 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n24wz" event={"ID":"0037aba1-80ac-48e4-baf7-c047e44c4e22","Type":"ContainerStarted","Data":"0b9130b024faa02b4573c4f67232bf223590481e6dc2734ecc219f15b13161fb"} Feb 26 08:46:45 crc kubenswrapper[4741]: I0226 08:46:45.253470 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n24wz" podStartSLOduration=5.32697725 podStartE2EDuration="17.253447007s" podCreationTimestamp="2026-02-26 08:46:28 +0000 UTC" firstStartedPulling="2026-02-26 08:46:31.471610423 +0000 UTC m=+2026.467547800" lastFinishedPulling="2026-02-26 08:46:43.39808017 +0000 UTC m=+2038.394017557" observedRunningTime="2026-02-26 08:46:45.249033151 +0000 UTC m=+2040.244970538" watchObservedRunningTime="2026-02-26 08:46:45.253447007 +0000 UTC m=+2040.249384404" Feb 26 08:46:48 crc kubenswrapper[4741]: I0226 08:46:48.891204 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:48 crc kubenswrapper[4741]: I0226 08:46:48.891934 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:49 crc kubenswrapper[4741]: I0226 08:46:49.956965 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n24wz" podUID="0037aba1-80ac-48e4-baf7-c047e44c4e22" containerName="registry-server" probeResult="failure" output=< Feb 26 08:46:49 crc kubenswrapper[4741]: 
timeout: failed to connect service ":50051" within 1s Feb 26 08:46:49 crc kubenswrapper[4741]: > Feb 26 08:46:56 crc kubenswrapper[4741]: I0226 08:46:56.743349 4741 scope.go:117] "RemoveContainer" containerID="dee6721ce555d47c9b3fba111eb6bdd064055f42da703224071222561104af0f" Feb 26 08:46:56 crc kubenswrapper[4741]: I0226 08:46:56.772098 4741 scope.go:117] "RemoveContainer" containerID="e271dbba8ceafff7070556a9155960d7e34b9f2b3a174e19e71ea3e64b902b55" Feb 26 08:46:56 crc kubenswrapper[4741]: I0226 08:46:56.855046 4741 scope.go:117] "RemoveContainer" containerID="35253af14222cce888452ee7920ee9362499af02caf6a760b00eb84b8256a002" Feb 26 08:46:56 crc kubenswrapper[4741]: I0226 08:46:56.948796 4741 scope.go:117] "RemoveContainer" containerID="725526e53136da120a04d3bd2776ddbc05e51d9c0b6eb2b02aa75da41e26bf30" Feb 26 08:46:57 crc kubenswrapper[4741]: I0226 08:46:57.007084 4741 scope.go:117] "RemoveContainer" containerID="bef51badf4218dc2572110d55bf70f30af64faae6455c98c4081a4e0906723b5" Feb 26 08:46:58 crc kubenswrapper[4741]: I0226 08:46:58.951735 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:59 crc kubenswrapper[4741]: I0226 08:46:59.016467 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:46:59 crc kubenswrapper[4741]: I0226 08:46:59.756621 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n24wz"] Feb 26 08:47:00 crc kubenswrapper[4741]: I0226 08:47:00.454551 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n24wz" podUID="0037aba1-80ac-48e4-baf7-c047e44c4e22" containerName="registry-server" containerID="cri-o://0b9130b024faa02b4573c4f67232bf223590481e6dc2734ecc219f15b13161fb" gracePeriod=2 Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.115621 4741 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.208297 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0037aba1-80ac-48e4-baf7-c047e44c4e22-catalog-content\") pod \"0037aba1-80ac-48e4-baf7-c047e44c4e22\" (UID: \"0037aba1-80ac-48e4-baf7-c047e44c4e22\") " Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.208359 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpwmw\" (UniqueName: \"kubernetes.io/projected/0037aba1-80ac-48e4-baf7-c047e44c4e22-kube-api-access-mpwmw\") pod \"0037aba1-80ac-48e4-baf7-c047e44c4e22\" (UID: \"0037aba1-80ac-48e4-baf7-c047e44c4e22\") " Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.208392 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0037aba1-80ac-48e4-baf7-c047e44c4e22-utilities\") pod \"0037aba1-80ac-48e4-baf7-c047e44c4e22\" (UID: \"0037aba1-80ac-48e4-baf7-c047e44c4e22\") " Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.212534 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0037aba1-80ac-48e4-baf7-c047e44c4e22-utilities" (OuterVolumeSpecName: "utilities") pod "0037aba1-80ac-48e4-baf7-c047e44c4e22" (UID: "0037aba1-80ac-48e4-baf7-c047e44c4e22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.219298 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0037aba1-80ac-48e4-baf7-c047e44c4e22-kube-api-access-mpwmw" (OuterVolumeSpecName: "kube-api-access-mpwmw") pod "0037aba1-80ac-48e4-baf7-c047e44c4e22" (UID: "0037aba1-80ac-48e4-baf7-c047e44c4e22"). 
InnerVolumeSpecName "kube-api-access-mpwmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.261601 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0037aba1-80ac-48e4-baf7-c047e44c4e22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0037aba1-80ac-48e4-baf7-c047e44c4e22" (UID: "0037aba1-80ac-48e4-baf7-c047e44c4e22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.314843 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0037aba1-80ac-48e4-baf7-c047e44c4e22-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.315344 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0037aba1-80ac-48e4-baf7-c047e44c4e22-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.315466 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpwmw\" (UniqueName: \"kubernetes.io/projected/0037aba1-80ac-48e4-baf7-c047e44c4e22-kube-api-access-mpwmw\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.470937 4741 generic.go:334] "Generic (PLEG): container finished" podID="0037aba1-80ac-48e4-baf7-c047e44c4e22" containerID="0b9130b024faa02b4573c4f67232bf223590481e6dc2734ecc219f15b13161fb" exitCode=0 Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.470991 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n24wz" event={"ID":"0037aba1-80ac-48e4-baf7-c047e44c4e22","Type":"ContainerDied","Data":"0b9130b024faa02b4573c4f67232bf223590481e6dc2734ecc219f15b13161fb"} Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.471027 
4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n24wz" event={"ID":"0037aba1-80ac-48e4-baf7-c047e44c4e22","Type":"ContainerDied","Data":"0be8cab21f1dff455ab1ee5e6d13039de5f6ec6d34111a48ec8190ce6595db14"} Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.471046 4741 scope.go:117] "RemoveContainer" containerID="0b9130b024faa02b4573c4f67232bf223590481e6dc2734ecc219f15b13161fb" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.471267 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n24wz" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.508274 4741 scope.go:117] "RemoveContainer" containerID="10bfa7954c508a4026e647c5c79e03b8addf4010b93cdc4c040492edf6675cd0" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.522615 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n24wz"] Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.539588 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n24wz"] Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.544141 4741 scope.go:117] "RemoveContainer" containerID="19a665b0f2ebd974cab93a243a29d57c18d56b77771a665471db3b30c5ee29a4" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.599726 4741 scope.go:117] "RemoveContainer" containerID="0b9130b024faa02b4573c4f67232bf223590481e6dc2734ecc219f15b13161fb" Feb 26 08:47:01 crc kubenswrapper[4741]: E0226 08:47:01.600554 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b9130b024faa02b4573c4f67232bf223590481e6dc2734ecc219f15b13161fb\": container with ID starting with 0b9130b024faa02b4573c4f67232bf223590481e6dc2734ecc219f15b13161fb not found: ID does not exist" containerID="0b9130b024faa02b4573c4f67232bf223590481e6dc2734ecc219f15b13161fb" Feb 26 08:47:01 
crc kubenswrapper[4741]: I0226 08:47:01.600624 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b9130b024faa02b4573c4f67232bf223590481e6dc2734ecc219f15b13161fb"} err="failed to get container status \"0b9130b024faa02b4573c4f67232bf223590481e6dc2734ecc219f15b13161fb\": rpc error: code = NotFound desc = could not find container \"0b9130b024faa02b4573c4f67232bf223590481e6dc2734ecc219f15b13161fb\": container with ID starting with 0b9130b024faa02b4573c4f67232bf223590481e6dc2734ecc219f15b13161fb not found: ID does not exist" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.600664 4741 scope.go:117] "RemoveContainer" containerID="10bfa7954c508a4026e647c5c79e03b8addf4010b93cdc4c040492edf6675cd0" Feb 26 08:47:01 crc kubenswrapper[4741]: E0226 08:47:01.601593 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10bfa7954c508a4026e647c5c79e03b8addf4010b93cdc4c040492edf6675cd0\": container with ID starting with 10bfa7954c508a4026e647c5c79e03b8addf4010b93cdc4c040492edf6675cd0 not found: ID does not exist" containerID="10bfa7954c508a4026e647c5c79e03b8addf4010b93cdc4c040492edf6675cd0" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.601637 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10bfa7954c508a4026e647c5c79e03b8addf4010b93cdc4c040492edf6675cd0"} err="failed to get container status \"10bfa7954c508a4026e647c5c79e03b8addf4010b93cdc4c040492edf6675cd0\": rpc error: code = NotFound desc = could not find container \"10bfa7954c508a4026e647c5c79e03b8addf4010b93cdc4c040492edf6675cd0\": container with ID starting with 10bfa7954c508a4026e647c5c79e03b8addf4010b93cdc4c040492edf6675cd0 not found: ID does not exist" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.601660 4741 scope.go:117] "RemoveContainer" containerID="19a665b0f2ebd974cab93a243a29d57c18d56b77771a665471db3b30c5ee29a4" Feb 26 
08:47:01 crc kubenswrapper[4741]: E0226 08:47:01.601959 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19a665b0f2ebd974cab93a243a29d57c18d56b77771a665471db3b30c5ee29a4\": container with ID starting with 19a665b0f2ebd974cab93a243a29d57c18d56b77771a665471db3b30c5ee29a4 not found: ID does not exist" containerID="19a665b0f2ebd974cab93a243a29d57c18d56b77771a665471db3b30c5ee29a4" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.601987 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a665b0f2ebd974cab93a243a29d57c18d56b77771a665471db3b30c5ee29a4"} err="failed to get container status \"19a665b0f2ebd974cab93a243a29d57c18d56b77771a665471db3b30c5ee29a4\": rpc error: code = NotFound desc = could not find container \"19a665b0f2ebd974cab93a243a29d57c18d56b77771a665471db3b30c5ee29a4\": container with ID starting with 19a665b0f2ebd974cab93a243a29d57c18d56b77771a665471db3b30c5ee29a4 not found: ID does not exist" Feb 26 08:47:01 crc kubenswrapper[4741]: I0226 08:47:01.802779 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0037aba1-80ac-48e4-baf7-c047e44c4e22" path="/var/lib/kubelet/pods/0037aba1-80ac-48e4-baf7-c047e44c4e22/volumes" Feb 26 08:47:12 crc kubenswrapper[4741]: I0226 08:47:12.648577 4741 generic.go:334] "Generic (PLEG): container finished" podID="9f6d22be-0e7b-46b9-beff-4dacd2f8ee69" containerID="31407cfb64445ca5d512c72d72a42e1f0e0bedb5311f01116d523720a74340dd" exitCode=0 Feb 26 08:47:12 crc kubenswrapper[4741]: I0226 08:47:12.648714 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69","Type":"ContainerDied","Data":"31407cfb64445ca5d512c72d72a42e1f0e0bedb5311f01116d523720a74340dd"} Feb 26 08:47:13 crc kubenswrapper[4741]: I0226 08:47:13.667405 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-1" event={"ID":"9f6d22be-0e7b-46b9-beff-4dacd2f8ee69","Type":"ContainerStarted","Data":"5cb1ead1de013d0fe955e2077d65a9898af7ebf1c10f9a1c4071eb4b8ef0d8db"} Feb 26 08:47:13 crc kubenswrapper[4741]: I0226 08:47:13.668382 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 26 08:47:13 crc kubenswrapper[4741]: I0226 08:47:13.712471 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.712445096 podStartE2EDuration="37.712445096s" podCreationTimestamp="2026-02-26 08:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:47:13.699381304 +0000 UTC m=+2068.695318701" watchObservedRunningTime="2026-02-26 08:47:13.712445096 +0000 UTC m=+2068.708382483" Feb 26 08:47:27 crc kubenswrapper[4741]: I0226 08:47:27.223571 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 26 08:47:27 crc kubenswrapper[4741]: I0226 08:47:27.296581 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 08:47:32 crc kubenswrapper[4741]: I0226 08:47:32.382714 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="26969fe6-2bb9-4f23-8c49-d9d359763da3" containerName="rabbitmq" containerID="cri-o://4031d6b7bcb81112eca030c5fd4613b59b39e890138fdd3df822f24f13b1f901" gracePeriod=604795 Feb 26 08:47:37 crc kubenswrapper[4741]: I0226 08:47:37.670230 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="26969fe6-2bb9-4f23-8c49-d9d359763da3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.046288 4741 generic.go:334] "Generic (PLEG): 
container finished" podID="26969fe6-2bb9-4f23-8c49-d9d359763da3" containerID="4031d6b7bcb81112eca030c5fd4613b59b39e890138fdd3df822f24f13b1f901" exitCode=0 Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.046364 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"26969fe6-2bb9-4f23-8c49-d9d359763da3","Type":"ContainerDied","Data":"4031d6b7bcb81112eca030c5fd4613b59b39e890138fdd3df822f24f13b1f901"} Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.046819 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"26969fe6-2bb9-4f23-8c49-d9d359763da3","Type":"ContainerDied","Data":"740f6a48d4941d4ec2f62286a914924f22f284d7fb520b8188a19b4f456cdc74"} Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.046843 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="740f6a48d4941d4ec2f62286a914924f22f284d7fb520b8188a19b4f456cdc74" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.153736 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.280955 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26969fe6-2bb9-4f23-8c49-d9d359763da3-pod-info\") pod \"26969fe6-2bb9-4f23-8c49-d9d359763da3\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.280996 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-server-conf\") pod \"26969fe6-2bb9-4f23-8c49-d9d359763da3\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.281274 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-plugins-conf\") pod \"26969fe6-2bb9-4f23-8c49-d9d359763da3\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.281345 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv7ks\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-kube-api-access-cv7ks\") pod \"26969fe6-2bb9-4f23-8c49-d9d359763da3\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.281376 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26969fe6-2bb9-4f23-8c49-d9d359763da3-erlang-cookie-secret\") pod \"26969fe6-2bb9-4f23-8c49-d9d359763da3\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.281401 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-plugins\") pod \"26969fe6-2bb9-4f23-8c49-d9d359763da3\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.282649 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "26969fe6-2bb9-4f23-8c49-d9d359763da3" (UID: "26969fe6-2bb9-4f23-8c49-d9d359763da3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.282718 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\") pod \"26969fe6-2bb9-4f23-8c49-d9d359763da3\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.282806 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-erlang-cookie\") pod \"26969fe6-2bb9-4f23-8c49-d9d359763da3\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.282845 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-config-data\") pod \"26969fe6-2bb9-4f23-8c49-d9d359763da3\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.282862 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod 
"26969fe6-2bb9-4f23-8c49-d9d359763da3" (UID: "26969fe6-2bb9-4f23-8c49-d9d359763da3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.282899 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-confd\") pod \"26969fe6-2bb9-4f23-8c49-d9d359763da3\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.282929 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-tls\") pod \"26969fe6-2bb9-4f23-8c49-d9d359763da3\" (UID: \"26969fe6-2bb9-4f23-8c49-d9d359763da3\") " Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.283205 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "26969fe6-2bb9-4f23-8c49-d9d359763da3" (UID: "26969fe6-2bb9-4f23-8c49-d9d359763da3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.283839 4741 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.283858 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.283868 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.291325 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "26969fe6-2bb9-4f23-8c49-d9d359763da3" (UID: "26969fe6-2bb9-4f23-8c49-d9d359763da3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.302647 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26969fe6-2bb9-4f23-8c49-d9d359763da3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "26969fe6-2bb9-4f23-8c49-d9d359763da3" (UID: "26969fe6-2bb9-4f23-8c49-d9d359763da3"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.310364 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-kube-api-access-cv7ks" (OuterVolumeSpecName: "kube-api-access-cv7ks") pod "26969fe6-2bb9-4f23-8c49-d9d359763da3" (UID: "26969fe6-2bb9-4f23-8c49-d9d359763da3"). InnerVolumeSpecName "kube-api-access-cv7ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.310418 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/26969fe6-2bb9-4f23-8c49-d9d359763da3-pod-info" (OuterVolumeSpecName: "pod-info") pod "26969fe6-2bb9-4f23-8c49-d9d359763da3" (UID: "26969fe6-2bb9-4f23-8c49-d9d359763da3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.328626 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061" (OuterVolumeSpecName: "persistence") pod "26969fe6-2bb9-4f23-8c49-d9d359763da3" (UID: "26969fe6-2bb9-4f23-8c49-d9d359763da3"). InnerVolumeSpecName "pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.369451 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-config-data" (OuterVolumeSpecName: "config-data") pod "26969fe6-2bb9-4f23-8c49-d9d359763da3" (UID: "26969fe6-2bb9-4f23-8c49-d9d359763da3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.387136 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-server-conf" (OuterVolumeSpecName: "server-conf") pod "26969fe6-2bb9-4f23-8c49-d9d359763da3" (UID: "26969fe6-2bb9-4f23-8c49-d9d359763da3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.388597 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv7ks\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-kube-api-access-cv7ks\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.388621 4741 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26969fe6-2bb9-4f23-8c49-d9d359763da3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.388656 4741 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\") on node \"crc\" " Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.388669 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.388678 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.388688 4741 reconciler_common.go:293] "Volume 
detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26969fe6-2bb9-4f23-8c49-d9d359763da3-pod-info\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.388698 4741 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26969fe6-2bb9-4f23-8c49-d9d359763da3-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.451189 4741 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.451375 4741 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061") on node "crc" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.493675 4741 reconciler_common.go:293] "Volume detached for volume \"pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.518360 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "26969fe6-2bb9-4f23-8c49-d9d359763da3" (UID: "26969fe6-2bb9-4f23-8c49-d9d359763da3"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:47:39 crc kubenswrapper[4741]: I0226 08:47:39.597516 4741 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26969fe6-2bb9-4f23-8c49-d9d359763da3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.058894 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.092738 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.114201 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.133803 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 08:47:40 crc kubenswrapper[4741]: E0226 08:47:40.134515 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26969fe6-2bb9-4f23-8c49-d9d359763da3" containerName="setup-container" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.134534 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="26969fe6-2bb9-4f23-8c49-d9d359763da3" containerName="setup-container" Feb 26 08:47:40 crc kubenswrapper[4741]: E0226 08:47:40.134556 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0037aba1-80ac-48e4-baf7-c047e44c4e22" containerName="extract-content" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.134562 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="0037aba1-80ac-48e4-baf7-c047e44c4e22" containerName="extract-content" Feb 26 08:47:40 crc kubenswrapper[4741]: E0226 08:47:40.134589 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0037aba1-80ac-48e4-baf7-c047e44c4e22" containerName="extract-utilities" Feb 26 08:47:40 crc 
kubenswrapper[4741]: I0226 08:47:40.134596 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="0037aba1-80ac-48e4-baf7-c047e44c4e22" containerName="extract-utilities" Feb 26 08:47:40 crc kubenswrapper[4741]: E0226 08:47:40.134615 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0037aba1-80ac-48e4-baf7-c047e44c4e22" containerName="registry-server" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.134621 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="0037aba1-80ac-48e4-baf7-c047e44c4e22" containerName="registry-server" Feb 26 08:47:40 crc kubenswrapper[4741]: E0226 08:47:40.134635 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26969fe6-2bb9-4f23-8c49-d9d359763da3" containerName="rabbitmq" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.134640 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="26969fe6-2bb9-4f23-8c49-d9d359763da3" containerName="rabbitmq" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.134885 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="0037aba1-80ac-48e4-baf7-c047e44c4e22" containerName="registry-server" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.134913 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="26969fe6-2bb9-4f23-8c49-d9d359763da3" containerName="rabbitmq" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.137058 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.175192 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.221036 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.221100 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.221152 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8fkg\" (UniqueName: \"kubernetes.io/projected/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-kube-api-access-s8fkg\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.221270 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.221315 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.221338 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.221362 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.221411 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.221433 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.221460 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.221544 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-config-data\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.323997 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.324075 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.324133 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8fkg\" (UniqueName: \"kubernetes.io/projected/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-kube-api-access-s8fkg\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.324199 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.324266 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.324294 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.324609 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.324693 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.324762 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: 
I0226 08:47:40.324944 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.325133 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-config-data\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.325499 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.325871 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.326471 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.326859 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.327006 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-config-data\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.330586 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.330656 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.330985 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.332598 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc 
kubenswrapper[4741]: I0226 08:47:40.334850 4741 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.334898 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/57beecbda7054f70039ef944bb56736e90d719c0f9e55f6bbb987ff859fc9f8b/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.356429 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8fkg\" (UniqueName: \"kubernetes.io/projected/fa1ea6e3-fc0a-4e77-b384-1e8629a4707f-kube-api-access-s8fkg\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.416144 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e7d7729-d8db-4b88-ac1c-72fc6eed6061\") pod \"rabbitmq-server-0\" (UID: \"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f\") " pod="openstack/rabbitmq-server-0" Feb 26 08:47:40 crc kubenswrapper[4741]: I0226 08:47:40.462885 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 08:47:41 crc kubenswrapper[4741]: W0226 08:47:41.100946 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa1ea6e3_fc0a_4e77_b384_1e8629a4707f.slice/crio-44ac20321a50ccb69da1a97ca515e254df8e3ca20a0b5c5ffda2b4300ebe4e5c WatchSource:0}: Error finding container 44ac20321a50ccb69da1a97ca515e254df8e3ca20a0b5c5ffda2b4300ebe4e5c: Status 404 returned error can't find the container with id 44ac20321a50ccb69da1a97ca515e254df8e3ca20a0b5c5ffda2b4300ebe4e5c Feb 26 08:47:41 crc kubenswrapper[4741]: I0226 08:47:41.103646 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 08:47:41 crc kubenswrapper[4741]: I0226 08:47:41.805583 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26969fe6-2bb9-4f23-8c49-d9d359763da3" path="/var/lib/kubelet/pods/26969fe6-2bb9-4f23-8c49-d9d359763da3/volumes" Feb 26 08:47:42 crc kubenswrapper[4741]: I0226 08:47:42.086596 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f","Type":"ContainerStarted","Data":"44ac20321a50ccb69da1a97ca515e254df8e3ca20a0b5c5ffda2b4300ebe4e5c"} Feb 26 08:47:44 crc kubenswrapper[4741]: I0226 08:47:44.114326 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f","Type":"ContainerStarted","Data":"ab82bb9b474a20f2091f4cc7254cb8701f41a8e6972c0820692cbfd03695e7d7"} Feb 26 08:47:57 crc kubenswrapper[4741]: I0226 08:47:57.273145 4741 scope.go:117] "RemoveContainer" containerID="b9e2fa9f80a4da777f9e40e1c746995674137584845b0445b9150fbaa83ecc3e" Feb 26 08:47:57 crc kubenswrapper[4741]: I0226 08:47:57.306768 4741 scope.go:117] "RemoveContainer" containerID="8c616e107c839e4915f447057601903ce753a5701dce65084e24fa53495807e7" Feb 26 08:47:57 crc 
kubenswrapper[4741]: I0226 08:47:57.344022 4741 scope.go:117] "RemoveContainer" containerID="961512aa86f71fcb7f86996329bae4343404592743d22cc136c4d6d99f257a2e" Feb 26 08:47:57 crc kubenswrapper[4741]: I0226 08:47:57.426375 4741 scope.go:117] "RemoveContainer" containerID="2b42200b74623c8541be3a474ac586277c03eee8a9333693733c5cb5f9f47a7c" Feb 26 08:47:57 crc kubenswrapper[4741]: I0226 08:47:57.474047 4741 scope.go:117] "RemoveContainer" containerID="4c3005f273974cd0fdeed189a0c8bf7f50189fe7415c19858716cc343d5225b7" Feb 26 08:47:57 crc kubenswrapper[4741]: I0226 08:47:57.499342 4741 scope.go:117] "RemoveContainer" containerID="4031d6b7bcb81112eca030c5fd4613b59b39e890138fdd3df822f24f13b1f901" Feb 26 08:48:00 crc kubenswrapper[4741]: I0226 08:48:00.158284 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534928-75cn2"] Feb 26 08:48:00 crc kubenswrapper[4741]: I0226 08:48:00.160953 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534928-75cn2" Feb 26 08:48:00 crc kubenswrapper[4741]: I0226 08:48:00.164476 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:48:00 crc kubenswrapper[4741]: I0226 08:48:00.164578 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:48:00 crc kubenswrapper[4741]: I0226 08:48:00.167477 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:48:00 crc kubenswrapper[4741]: I0226 08:48:00.173733 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534928-75cn2"] Feb 26 08:48:00 crc kubenswrapper[4741]: I0226 08:48:00.278031 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85wn8\" (UniqueName: 
\"kubernetes.io/projected/4a5a8c99-3134-4171-94e9-666b9c5ca8a7-kube-api-access-85wn8\") pod \"auto-csr-approver-29534928-75cn2\" (UID: \"4a5a8c99-3134-4171-94e9-666b9c5ca8a7\") " pod="openshift-infra/auto-csr-approver-29534928-75cn2" Feb 26 08:48:00 crc kubenswrapper[4741]: I0226 08:48:00.381071 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85wn8\" (UniqueName: \"kubernetes.io/projected/4a5a8c99-3134-4171-94e9-666b9c5ca8a7-kube-api-access-85wn8\") pod \"auto-csr-approver-29534928-75cn2\" (UID: \"4a5a8c99-3134-4171-94e9-666b9c5ca8a7\") " pod="openshift-infra/auto-csr-approver-29534928-75cn2" Feb 26 08:48:00 crc kubenswrapper[4741]: I0226 08:48:00.401384 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85wn8\" (UniqueName: \"kubernetes.io/projected/4a5a8c99-3134-4171-94e9-666b9c5ca8a7-kube-api-access-85wn8\") pod \"auto-csr-approver-29534928-75cn2\" (UID: \"4a5a8c99-3134-4171-94e9-666b9c5ca8a7\") " pod="openshift-infra/auto-csr-approver-29534928-75cn2" Feb 26 08:48:00 crc kubenswrapper[4741]: I0226 08:48:00.487431 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534928-75cn2" Feb 26 08:48:01 crc kubenswrapper[4741]: I0226 08:48:01.078528 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 08:48:01 crc kubenswrapper[4741]: I0226 08:48:01.080710 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534928-75cn2"] Feb 26 08:48:01 crc kubenswrapper[4741]: I0226 08:48:01.380510 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534928-75cn2" event={"ID":"4a5a8c99-3134-4171-94e9-666b9c5ca8a7","Type":"ContainerStarted","Data":"acbbe1a03eff954c5d891891e20c6e63fc19a496f3a764fbd896a31d80479f9c"} Feb 26 08:48:03 crc kubenswrapper[4741]: I0226 08:48:03.414547 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534928-75cn2" event={"ID":"4a5a8c99-3134-4171-94e9-666b9c5ca8a7","Type":"ContainerStarted","Data":"341aa065ea73c9d6ed89e30521984da4287e2fe7f85601afe6b583b013713dcf"} Feb 26 08:48:04 crc kubenswrapper[4741]: I0226 08:48:04.430721 4741 generic.go:334] "Generic (PLEG): container finished" podID="4a5a8c99-3134-4171-94e9-666b9c5ca8a7" containerID="341aa065ea73c9d6ed89e30521984da4287e2fe7f85601afe6b583b013713dcf" exitCode=0 Feb 26 08:48:04 crc kubenswrapper[4741]: I0226 08:48:04.431160 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534928-75cn2" event={"ID":"4a5a8c99-3134-4171-94e9-666b9c5ca8a7","Type":"ContainerDied","Data":"341aa065ea73c9d6ed89e30521984da4287e2fe7f85601afe6b583b013713dcf"} Feb 26 08:48:04 crc kubenswrapper[4741]: I0226 08:48:04.872000 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534928-75cn2" Feb 26 08:48:04 crc kubenswrapper[4741]: I0226 08:48:04.970851 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85wn8\" (UniqueName: \"kubernetes.io/projected/4a5a8c99-3134-4171-94e9-666b9c5ca8a7-kube-api-access-85wn8\") pod \"4a5a8c99-3134-4171-94e9-666b9c5ca8a7\" (UID: \"4a5a8c99-3134-4171-94e9-666b9c5ca8a7\") " Feb 26 08:48:04 crc kubenswrapper[4741]: I0226 08:48:04.976185 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a5a8c99-3134-4171-94e9-666b9c5ca8a7-kube-api-access-85wn8" (OuterVolumeSpecName: "kube-api-access-85wn8") pod "4a5a8c99-3134-4171-94e9-666b9c5ca8a7" (UID: "4a5a8c99-3134-4171-94e9-666b9c5ca8a7"). InnerVolumeSpecName "kube-api-access-85wn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:48:05 crc kubenswrapper[4741]: I0226 08:48:05.075393 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85wn8\" (UniqueName: \"kubernetes.io/projected/4a5a8c99-3134-4171-94e9-666b9c5ca8a7-kube-api-access-85wn8\") on node \"crc\" DevicePath \"\"" Feb 26 08:48:05 crc kubenswrapper[4741]: I0226 08:48:05.446572 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534928-75cn2" event={"ID":"4a5a8c99-3134-4171-94e9-666b9c5ca8a7","Type":"ContainerDied","Data":"acbbe1a03eff954c5d891891e20c6e63fc19a496f3a764fbd896a31d80479f9c"} Feb 26 08:48:05 crc kubenswrapper[4741]: I0226 08:48:05.446929 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acbbe1a03eff954c5d891891e20c6e63fc19a496f3a764fbd896a31d80479f9c" Feb 26 08:48:05 crc kubenswrapper[4741]: I0226 08:48:05.446612 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534928-75cn2" Feb 26 08:48:05 crc kubenswrapper[4741]: I0226 08:48:05.962596 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534922-4g5sp"] Feb 26 08:48:05 crc kubenswrapper[4741]: I0226 08:48:05.976983 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534922-4g5sp"] Feb 26 08:48:07 crc kubenswrapper[4741]: I0226 08:48:07.912360 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75" path="/var/lib/kubelet/pods/0e7c69ce-d37d-4725-bb2c-d4a7dd56ca75/volumes" Feb 26 08:48:16 crc kubenswrapper[4741]: I0226 08:48:16.615165 4741 generic.go:334] "Generic (PLEG): container finished" podID="fa1ea6e3-fc0a-4e77-b384-1e8629a4707f" containerID="ab82bb9b474a20f2091f4cc7254cb8701f41a8e6972c0820692cbfd03695e7d7" exitCode=0 Feb 26 08:48:16 crc kubenswrapper[4741]: I0226 08:48:16.615250 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f","Type":"ContainerDied","Data":"ab82bb9b474a20f2091f4cc7254cb8701f41a8e6972c0820692cbfd03695e7d7"} Feb 26 08:48:17 crc kubenswrapper[4741]: I0226 08:48:17.632726 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fa1ea6e3-fc0a-4e77-b384-1e8629a4707f","Type":"ContainerStarted","Data":"ec93d4807a741117f5c42fb7be6eeb2c8f47113eaa7d337694566fee51ce2ed4"} Feb 26 08:48:17 crc kubenswrapper[4741]: I0226 08:48:17.633860 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 26 08:48:17 crc kubenswrapper[4741]: I0226 08:48:17.669944 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.66992129 podStartE2EDuration="37.66992129s" podCreationTimestamp="2026-02-26 08:47:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 08:48:17.659333538 +0000 UTC m=+2132.655270945" watchObservedRunningTime="2026-02-26 08:48:17.66992129 +0000 UTC m=+2132.665858677" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.294364 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lpbhq"] Feb 26 08:48:20 crc kubenswrapper[4741]: E0226 08:48:20.295647 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a5a8c99-3134-4171-94e9-666b9c5ca8a7" containerName="oc" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.295677 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5a8c99-3134-4171-94e9-666b9c5ca8a7" containerName="oc" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.296167 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a5a8c99-3134-4171-94e9-666b9c5ca8a7" containerName="oc" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.299236 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.316347 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lpbhq"] Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.398261 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-utilities\") pod \"redhat-operators-lpbhq\" (UID: \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\") " pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.399344 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-catalog-content\") pod \"redhat-operators-lpbhq\" (UID: \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\") " pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.399421 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj2n2\" (UniqueName: \"kubernetes.io/projected/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-kube-api-access-nj2n2\") pod \"redhat-operators-lpbhq\" (UID: \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\") " pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.502894 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-utilities\") pod \"redhat-operators-lpbhq\" (UID: \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\") " pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.503375 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-catalog-content\") pod \"redhat-operators-lpbhq\" (UID: \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\") " pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.503409 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj2n2\" (UniqueName: \"kubernetes.io/projected/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-kube-api-access-nj2n2\") pod \"redhat-operators-lpbhq\" (UID: \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\") " pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.503586 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-utilities\") pod \"redhat-operators-lpbhq\" (UID: \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\") " pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.503883 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-catalog-content\") pod \"redhat-operators-lpbhq\" (UID: \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\") " pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.528383 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj2n2\" (UniqueName: \"kubernetes.io/projected/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-kube-api-access-nj2n2\") pod \"redhat-operators-lpbhq\" (UID: \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\") " pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:48:20 crc kubenswrapper[4741]: I0226 08:48:20.627210 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:48:21 crc kubenswrapper[4741]: I0226 08:48:21.336967 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lpbhq"] Feb 26 08:48:21 crc kubenswrapper[4741]: I0226 08:48:21.687233 4741 generic.go:334] "Generic (PLEG): container finished" podID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerID="0bc7944d41db0d9ec71c70d111008fc2412fd90bbae2f193926672641e444bed" exitCode=0 Feb 26 08:48:21 crc kubenswrapper[4741]: I0226 08:48:21.687362 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbhq" event={"ID":"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2","Type":"ContainerDied","Data":"0bc7944d41db0d9ec71c70d111008fc2412fd90bbae2f193926672641e444bed"} Feb 26 08:48:21 crc kubenswrapper[4741]: I0226 08:48:21.687713 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbhq" event={"ID":"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2","Type":"ContainerStarted","Data":"18df9f80e47091223894d6658a8571957fa45447a668ebefc610b187b32b823f"} Feb 26 08:48:23 crc kubenswrapper[4741]: I0226 08:48:23.723915 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbhq" event={"ID":"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2","Type":"ContainerStarted","Data":"12d4659bec0e470dc994caed0b214d8eac9a26d0a8dc3e1059bc4dfa92666d60"} Feb 26 08:48:25 crc kubenswrapper[4741]: I0226 08:48:25.148983 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:48:25 crc kubenswrapper[4741]: I0226 08:48:25.149072 4741 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:48:29 crc kubenswrapper[4741]: I0226 08:48:29.813239 4741 generic.go:334] "Generic (PLEG): container finished" podID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerID="12d4659bec0e470dc994caed0b214d8eac9a26d0a8dc3e1059bc4dfa92666d60" exitCode=0 Feb 26 08:48:29 crc kubenswrapper[4741]: I0226 08:48:29.813742 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbhq" event={"ID":"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2","Type":"ContainerDied","Data":"12d4659bec0e470dc994caed0b214d8eac9a26d0a8dc3e1059bc4dfa92666d60"} Feb 26 08:48:30 crc kubenswrapper[4741]: I0226 08:48:30.466508 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 26 08:48:30 crc kubenswrapper[4741]: I0226 08:48:30.834297 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbhq" event={"ID":"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2","Type":"ContainerStarted","Data":"dedea2da0bad210ac0002e33b7fed783ebbda8b90b03287eaf420caaa3d0a071"} Feb 26 08:48:30 crc kubenswrapper[4741]: I0226 08:48:30.875958 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lpbhq" podStartSLOduration=2.336953178 podStartE2EDuration="10.875927358s" podCreationTimestamp="2026-02-26 08:48:20 +0000 UTC" firstStartedPulling="2026-02-26 08:48:21.689640104 +0000 UTC m=+2136.685577491" lastFinishedPulling="2026-02-26 08:48:30.228614284 +0000 UTC m=+2145.224551671" observedRunningTime="2026-02-26 08:48:30.8556445 +0000 UTC m=+2145.851581887" watchObservedRunningTime="2026-02-26 08:48:30.875927358 +0000 UTC m=+2145.871864745" Feb 26 08:48:40 crc 
kubenswrapper[4741]: I0226 08:48:40.627835 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:48:40 crc kubenswrapper[4741]: I0226 08:48:40.628555 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:48:41 crc kubenswrapper[4741]: I0226 08:48:41.683883 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpbhq" podUID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerName="registry-server" probeResult="failure" output=< Feb 26 08:48:41 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:48:41 crc kubenswrapper[4741]: > Feb 26 08:48:51 crc kubenswrapper[4741]: I0226 08:48:51.693969 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpbhq" podUID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerName="registry-server" probeResult="failure" output=< Feb 26 08:48:51 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:48:51 crc kubenswrapper[4741]: > Feb 26 08:48:55 crc kubenswrapper[4741]: I0226 08:48:55.148933 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:48:55 crc kubenswrapper[4741]: I0226 08:48:55.149664 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:48:57 crc kubenswrapper[4741]: I0226 08:48:57.611338 4741 
scope.go:117] "RemoveContainer" containerID="4ba1cecc1f47aa6cb49aa2e08ef9178d0be835fc899c87cc09188bcea039bdad" Feb 26 08:48:57 crc kubenswrapper[4741]: I0226 08:48:57.644618 4741 scope.go:117] "RemoveContainer" containerID="ae413794ff8c88e06a2bde687aafdc01070a102ce9c070115daa7ebaedb02330" Feb 26 08:48:57 crc kubenswrapper[4741]: I0226 08:48:57.735757 4741 scope.go:117] "RemoveContainer" containerID="78181bc545e58de3c5e121c4b65279bf6cafd4e8b27f0c1a6c91d63b3a7c161c" Feb 26 08:49:01 crc kubenswrapper[4741]: I0226 08:49:01.705088 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpbhq" podUID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerName="registry-server" probeResult="failure" output=< Feb 26 08:49:01 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:49:01 crc kubenswrapper[4741]: > Feb 26 08:49:03 crc kubenswrapper[4741]: I0226 08:49:03.071499 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-zq6r6"] Feb 26 08:49:03 crc kubenswrapper[4741]: I0226 08:49:03.118513 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jgvpc"] Feb 26 08:49:03 crc kubenswrapper[4741]: I0226 08:49:03.132282 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-zq6r6"] Feb 26 08:49:03 crc kubenswrapper[4741]: I0226 08:49:03.155476 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jgvpc"] Feb 26 08:49:03 crc kubenswrapper[4741]: I0226 08:49:03.862147 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d5b3c3-ad6b-4a93-86f5-842d24d6c20b" path="/var/lib/kubelet/pods/24d5b3c3-ad6b-4a93-86f5-842d24d6c20b/volumes" Feb 26 08:49:03 crc kubenswrapper[4741]: I0226 08:49:03.863910 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce5e2451-f816-4a9a-a18d-806eb3f5cf79" 
path="/var/lib/kubelet/pods/ce5e2451-f816-4a9a-a18d-806eb3f5cf79/volumes" Feb 26 08:49:04 crc kubenswrapper[4741]: I0226 08:49:04.068619 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2dff-account-create-update-nxlxz"] Feb 26 08:49:04 crc kubenswrapper[4741]: I0226 08:49:04.083645 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-v24ql"] Feb 26 08:49:04 crc kubenswrapper[4741]: I0226 08:49:04.099620 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2dff-account-create-update-nxlxz"] Feb 26 08:49:04 crc kubenswrapper[4741]: I0226 08:49:04.114037 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-v24ql"] Feb 26 08:49:05 crc kubenswrapper[4741]: I0226 08:49:05.046440 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-abee-account-create-update-jg7s7"] Feb 26 08:49:05 crc kubenswrapper[4741]: I0226 08:49:05.064528 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-07e6-account-create-update-qwfjj"] Feb 26 08:49:05 crc kubenswrapper[4741]: I0226 08:49:05.078472 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-abee-account-create-update-jg7s7"] Feb 26 08:49:05 crc kubenswrapper[4741]: I0226 08:49:05.092900 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-07e6-account-create-update-qwfjj"] Feb 26 08:49:05 crc kubenswrapper[4741]: I0226 08:49:05.814735 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04f305d3-533d-46ac-9db3-fd55e864eb83" path="/var/lib/kubelet/pods/04f305d3-533d-46ac-9db3-fd55e864eb83/volumes" Feb 26 08:49:05 crc kubenswrapper[4741]: I0226 08:49:05.816623 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c980da-2e8f-4979-b5a8-760039e24ea8" path="/var/lib/kubelet/pods/85c980da-2e8f-4979-b5a8-760039e24ea8/volumes" Feb 26 08:49:05 
crc kubenswrapper[4741]: I0226 08:49:05.818364 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00378f6-b3d9-40f7-889c-a6cce27718c4" path="/var/lib/kubelet/pods/e00378f6-b3d9-40f7-889c-a6cce27718c4/volumes" Feb 26 08:49:05 crc kubenswrapper[4741]: I0226 08:49:05.820160 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeddba64-b6cc-4ae0-8c09-8252931e1778" path="/var/lib/kubelet/pods/eeddba64-b6cc-4ae0-8c09-8252931e1778/volumes" Feb 26 08:49:07 crc kubenswrapper[4741]: I0226 08:49:07.039933 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6rxbw"] Feb 26 08:49:07 crc kubenswrapper[4741]: I0226 08:49:07.057200 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-t7rkz"] Feb 26 08:49:07 crc kubenswrapper[4741]: I0226 08:49:07.074471 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-26f9-account-create-update-dsrjk"] Feb 26 08:49:07 crc kubenswrapper[4741]: I0226 08:49:07.092055 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-t7rkz"] Feb 26 08:49:07 crc kubenswrapper[4741]: I0226 08:49:07.111038 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6rxbw"] Feb 26 08:49:07 crc kubenswrapper[4741]: I0226 08:49:07.132852 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-26f9-account-create-update-dsrjk"] Feb 26 08:49:07 crc kubenswrapper[4741]: I0226 08:49:07.812475 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10ea5eb4-4f56-417f-84fd-5ae940e74516" path="/var/lib/kubelet/pods/10ea5eb4-4f56-417f-84fd-5ae940e74516/volumes" Feb 26 08:49:07 crc kubenswrapper[4741]: I0226 08:49:07.826260 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aa2ef24-5972-42d6-b38e-adaef893b130" path="/var/lib/kubelet/pods/1aa2ef24-5972-42d6-b38e-adaef893b130/volumes" Feb 26 08:49:07 
crc kubenswrapper[4741]: I0226 08:49:07.829702 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93463751-5c16-4d33-abad-392b566eef58" path="/var/lib/kubelet/pods/93463751-5c16-4d33-abad-392b566eef58/volumes" Feb 26 08:49:10 crc kubenswrapper[4741]: I0226 08:49:10.696490 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:49:10 crc kubenswrapper[4741]: I0226 08:49:10.758263 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:49:10 crc kubenswrapper[4741]: I0226 08:49:10.947689 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lpbhq"] Feb 26 08:49:12 crc kubenswrapper[4741]: I0226 08:49:12.458941 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lpbhq" podUID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerName="registry-server" containerID="cri-o://dedea2da0bad210ac0002e33b7fed783ebbda8b90b03287eaf420caaa3d0a071" gracePeriod=2 Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.128187 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.220043 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj2n2\" (UniqueName: \"kubernetes.io/projected/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-kube-api-access-nj2n2\") pod \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\" (UID: \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\") " Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.220181 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-utilities\") pod \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\" (UID: \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\") " Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.220217 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-catalog-content\") pod \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\" (UID: \"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2\") " Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.221125 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-utilities" (OuterVolumeSpecName: "utilities") pod "297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" (UID: "297cd5a0-82a7-48d3-b7ce-8ef4f70530d2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.222342 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.226894 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-kube-api-access-nj2n2" (OuterVolumeSpecName: "kube-api-access-nj2n2") pod "297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" (UID: "297cd5a0-82a7-48d3-b7ce-8ef4f70530d2"). InnerVolumeSpecName "kube-api-access-nj2n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.325058 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj2n2\" (UniqueName: \"kubernetes.io/projected/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-kube-api-access-nj2n2\") on node \"crc\" DevicePath \"\"" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.364145 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" (UID: "297cd5a0-82a7-48d3-b7ce-8ef4f70530d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.428230 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.474278 4741 generic.go:334] "Generic (PLEG): container finished" podID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerID="dedea2da0bad210ac0002e33b7fed783ebbda8b90b03287eaf420caaa3d0a071" exitCode=0 Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.475283 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbhq" event={"ID":"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2","Type":"ContainerDied","Data":"dedea2da0bad210ac0002e33b7fed783ebbda8b90b03287eaf420caaa3d0a071"} Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.475364 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbhq" event={"ID":"297cd5a0-82a7-48d3-b7ce-8ef4f70530d2","Type":"ContainerDied","Data":"18df9f80e47091223894d6658a8571957fa45447a668ebefc610b187b32b823f"} Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.475388 4741 scope.go:117] "RemoveContainer" containerID="dedea2da0bad210ac0002e33b7fed783ebbda8b90b03287eaf420caaa3d0a071" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.475711 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lpbhq" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.515137 4741 scope.go:117] "RemoveContainer" containerID="12d4659bec0e470dc994caed0b214d8eac9a26d0a8dc3e1059bc4dfa92666d60" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.525017 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lpbhq"] Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.538160 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lpbhq"] Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.563792 4741 scope.go:117] "RemoveContainer" containerID="0bc7944d41db0d9ec71c70d111008fc2412fd90bbae2f193926672641e444bed" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.624187 4741 scope.go:117] "RemoveContainer" containerID="dedea2da0bad210ac0002e33b7fed783ebbda8b90b03287eaf420caaa3d0a071" Feb 26 08:49:13 crc kubenswrapper[4741]: E0226 08:49:13.626756 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dedea2da0bad210ac0002e33b7fed783ebbda8b90b03287eaf420caaa3d0a071\": container with ID starting with dedea2da0bad210ac0002e33b7fed783ebbda8b90b03287eaf420caaa3d0a071 not found: ID does not exist" containerID="dedea2da0bad210ac0002e33b7fed783ebbda8b90b03287eaf420caaa3d0a071" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.626828 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dedea2da0bad210ac0002e33b7fed783ebbda8b90b03287eaf420caaa3d0a071"} err="failed to get container status \"dedea2da0bad210ac0002e33b7fed783ebbda8b90b03287eaf420caaa3d0a071\": rpc error: code = NotFound desc = could not find container \"dedea2da0bad210ac0002e33b7fed783ebbda8b90b03287eaf420caaa3d0a071\": container with ID starting with dedea2da0bad210ac0002e33b7fed783ebbda8b90b03287eaf420caaa3d0a071 not found: ID does 
not exist" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.626866 4741 scope.go:117] "RemoveContainer" containerID="12d4659bec0e470dc994caed0b214d8eac9a26d0a8dc3e1059bc4dfa92666d60" Feb 26 08:49:13 crc kubenswrapper[4741]: E0226 08:49:13.631692 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12d4659bec0e470dc994caed0b214d8eac9a26d0a8dc3e1059bc4dfa92666d60\": container with ID starting with 12d4659bec0e470dc994caed0b214d8eac9a26d0a8dc3e1059bc4dfa92666d60 not found: ID does not exist" containerID="12d4659bec0e470dc994caed0b214d8eac9a26d0a8dc3e1059bc4dfa92666d60" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.631755 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12d4659bec0e470dc994caed0b214d8eac9a26d0a8dc3e1059bc4dfa92666d60"} err="failed to get container status \"12d4659bec0e470dc994caed0b214d8eac9a26d0a8dc3e1059bc4dfa92666d60\": rpc error: code = NotFound desc = could not find container \"12d4659bec0e470dc994caed0b214d8eac9a26d0a8dc3e1059bc4dfa92666d60\": container with ID starting with 12d4659bec0e470dc994caed0b214d8eac9a26d0a8dc3e1059bc4dfa92666d60 not found: ID does not exist" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.631810 4741 scope.go:117] "RemoveContainer" containerID="0bc7944d41db0d9ec71c70d111008fc2412fd90bbae2f193926672641e444bed" Feb 26 08:49:13 crc kubenswrapper[4741]: E0226 08:49:13.632387 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bc7944d41db0d9ec71c70d111008fc2412fd90bbae2f193926672641e444bed\": container with ID starting with 0bc7944d41db0d9ec71c70d111008fc2412fd90bbae2f193926672641e444bed not found: ID does not exist" containerID="0bc7944d41db0d9ec71c70d111008fc2412fd90bbae2f193926672641e444bed" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.632432 4741 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bc7944d41db0d9ec71c70d111008fc2412fd90bbae2f193926672641e444bed"} err="failed to get container status \"0bc7944d41db0d9ec71c70d111008fc2412fd90bbae2f193926672641e444bed\": rpc error: code = NotFound desc = could not find container \"0bc7944d41db0d9ec71c70d111008fc2412fd90bbae2f193926672641e444bed\": container with ID starting with 0bc7944d41db0d9ec71c70d111008fc2412fd90bbae2f193926672641e444bed not found: ID does not exist" Feb 26 08:49:13 crc kubenswrapper[4741]: I0226 08:49:13.807088 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" path="/var/lib/kubelet/pods/297cd5a0-82a7-48d3-b7ce-8ef4f70530d2/volumes" Feb 26 08:49:14 crc kubenswrapper[4741]: I0226 08:49:14.041033 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-be7b-account-create-update-v5l4z"] Feb 26 08:49:14 crc kubenswrapper[4741]: I0226 08:49:14.056643 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7"] Feb 26 08:49:14 crc kubenswrapper[4741]: I0226 08:49:14.071463 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-gmjx7"] Feb 26 08:49:14 crc kubenswrapper[4741]: I0226 08:49:14.086358 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-be7b-account-create-update-v5l4z"] Feb 26 08:49:15 crc kubenswrapper[4741]: I0226 08:49:15.839391 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c52b481-9036-4a56-a248-30b506dd1bea" path="/var/lib/kubelet/pods/9c52b481-9036-4a56-a248-30b506dd1bea/volumes" Feb 26 08:49:15 crc kubenswrapper[4741]: I0226 08:49:15.841708 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d613263d-cc5a-4d4b-8327-cb8a3faec8a7" path="/var/lib/kubelet/pods/d613263d-cc5a-4d4b-8327-cb8a3faec8a7/volumes" Feb 26 08:49:17 crc 
kubenswrapper[4741]: I0226 08:49:17.045810 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-v7nc2"] Feb 26 08:49:17 crc kubenswrapper[4741]: I0226 08:49:17.075467 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-v7nc2"] Feb 26 08:49:17 crc kubenswrapper[4741]: I0226 08:49:17.097092 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7114-account-create-update-jdvp4"] Feb 26 08:49:17 crc kubenswrapper[4741]: I0226 08:49:17.112473 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-37f8-account-create-update-pmrwl"] Feb 26 08:49:17 crc kubenswrapper[4741]: I0226 08:49:17.125367 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-833b-account-create-update-nq9n4"] Feb 26 08:49:17 crc kubenswrapper[4741]: I0226 08:49:17.138467 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7114-account-create-update-jdvp4"] Feb 26 08:49:17 crc kubenswrapper[4741]: I0226 08:49:17.152799 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-37f8-account-create-update-pmrwl"] Feb 26 08:49:17 crc kubenswrapper[4741]: I0226 08:49:17.165811 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-833b-account-create-update-nq9n4"] Feb 26 08:49:17 crc kubenswrapper[4741]: I0226 08:49:17.802885 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76abc98c-8108-43d6-b219-d8a228ee9de1" path="/var/lib/kubelet/pods/76abc98c-8108-43d6-b219-d8a228ee9de1/volumes" Feb 26 08:49:17 crc kubenswrapper[4741]: I0226 08:49:17.840468 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16704f2-d6ef-4c31-b3a8-533129c97ec2" path="/var/lib/kubelet/pods/b16704f2-d6ef-4c31-b3a8-533129c97ec2/volumes" Feb 26 08:49:17 crc kubenswrapper[4741]: I0226 08:49:17.858103 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cf167f51-9893-44e5-99f4-9841055d2e1b" path="/var/lib/kubelet/pods/cf167f51-9893-44e5-99f4-9841055d2e1b/volumes" Feb 26 08:49:17 crc kubenswrapper[4741]: I0226 08:49:17.859578 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f7ea40-66e5-4509-a8aa-4fb66d51ca19" path="/var/lib/kubelet/pods/e2f7ea40-66e5-4509-a8aa-4fb66d51ca19/volumes" Feb 26 08:49:18 crc kubenswrapper[4741]: I0226 08:49:18.065836 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rzqlp"] Feb 26 08:49:18 crc kubenswrapper[4741]: I0226 08:49:18.088716 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-xlrfl"] Feb 26 08:49:18 crc kubenswrapper[4741]: I0226 08:49:18.106990 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-rzqlp"] Feb 26 08:49:18 crc kubenswrapper[4741]: I0226 08:49:18.120241 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bfe8-account-create-update-96sxz"] Feb 26 08:49:18 crc kubenswrapper[4741]: I0226 08:49:18.133371 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-dn6fj"] Feb 26 08:49:18 crc kubenswrapper[4741]: I0226 08:49:18.145806 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-xlrfl"] Feb 26 08:49:18 crc kubenswrapper[4741]: I0226 08:49:18.158230 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bfe8-account-create-update-96sxz"] Feb 26 08:49:18 crc kubenswrapper[4741]: I0226 08:49:18.170584 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-dn6fj"] Feb 26 08:49:19 crc kubenswrapper[4741]: I0226 08:49:19.803732 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="275147ec-fc05-4ce6-92e7-f9ed21d8b85a" path="/var/lib/kubelet/pods/275147ec-fc05-4ce6-92e7-f9ed21d8b85a/volumes" Feb 26 08:49:19 crc kubenswrapper[4741]: I0226 08:49:19.804787 4741 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adabc55e-a269-495c-9d28-d8da64354f35" path="/var/lib/kubelet/pods/adabc55e-a269-495c-9d28-d8da64354f35/volumes" Feb 26 08:49:19 crc kubenswrapper[4741]: I0226 08:49:19.806494 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb214ad8-a870-4561-9c32-27c7b3943839" path="/var/lib/kubelet/pods/eb214ad8-a870-4561-9c32-27c7b3943839/volumes" Feb 26 08:49:19 crc kubenswrapper[4741]: I0226 08:49:19.807195 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31535f9-ac90-4d2b-bcc1-445fc2abc892" path="/var/lib/kubelet/pods/f31535f9-ac90-4d2b-bcc1-445fc2abc892/volumes" Feb 26 08:49:25 crc kubenswrapper[4741]: I0226 08:49:25.149310 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:49:25 crc kubenswrapper[4741]: I0226 08:49:25.149867 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:49:25 crc kubenswrapper[4741]: I0226 08:49:25.149932 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:49:25 crc kubenswrapper[4741]: I0226 08:49:25.151413 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b89194ce392f941126adc3f41706d6d90ecd60e8c8587e461ea2236baa6d97c"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 08:49:25 crc kubenswrapper[4741]: I0226 08:49:25.151486 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://3b89194ce392f941126adc3f41706d6d90ecd60e8c8587e461ea2236baa6d97c" gracePeriod=600 Feb 26 08:49:25 crc kubenswrapper[4741]: I0226 08:49:25.643553 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="3b89194ce392f941126adc3f41706d6d90ecd60e8c8587e461ea2236baa6d97c" exitCode=0 Feb 26 08:49:25 crc kubenswrapper[4741]: I0226 08:49:25.643837 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"3b89194ce392f941126adc3f41706d6d90ecd60e8c8587e461ea2236baa6d97c"} Feb 26 08:49:25 crc kubenswrapper[4741]: I0226 08:49:25.643883 4741 scope.go:117] "RemoveContainer" containerID="79f0fa9e22f06c2dc3b3364bd251c5022f94b10451cb8488c41b69fa27f72333" Feb 26 08:49:26 crc kubenswrapper[4741]: I0226 08:49:26.660663 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38"} Feb 26 08:49:39 crc kubenswrapper[4741]: I0226 08:49:39.067208 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-9zc9f"] Feb 26 08:49:39 crc kubenswrapper[4741]: I0226 08:49:39.084431 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-9zc9f"] Feb 26 08:49:39 crc kubenswrapper[4741]: I0226 08:49:39.805560 4741 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91289eb1-fb29-4b09-9f36-1f5d250f6b39" path="/var/lib/kubelet/pods/91289eb1-fb29-4b09-9f36-1f5d250f6b39/volumes" Feb 26 08:49:43 crc kubenswrapper[4741]: I0226 08:49:43.896647 4741 generic.go:334] "Generic (PLEG): container finished" podID="f9b8a965-3073-4b51-8dfc-a1bdf31ab63e" containerID="481ecbc138e3d7c252a93d6d6db21acdb29f7ddfdc30b2dbbce664600aea9b8f" exitCode=0 Feb 26 08:49:43 crc kubenswrapper[4741]: I0226 08:49:43.896764 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" event={"ID":"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e","Type":"ContainerDied","Data":"481ecbc138e3d7c252a93d6d6db21acdb29f7ddfdc30b2dbbce664600aea9b8f"} Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.493490 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.626535 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-ssh-key-openstack-edpm-ipam\") pod \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.626731 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5bx5\" (UniqueName: \"kubernetes.io/projected/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-kube-api-access-g5bx5\") pod \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.626769 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-inventory\") pod 
\"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.626905 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-bootstrap-combined-ca-bundle\") pod \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\" (UID: \"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e\") " Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.635454 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-kube-api-access-g5bx5" (OuterVolumeSpecName: "kube-api-access-g5bx5") pod "f9b8a965-3073-4b51-8dfc-a1bdf31ab63e" (UID: "f9b8a965-3073-4b51-8dfc-a1bdf31ab63e"). InnerVolumeSpecName "kube-api-access-g5bx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.635809 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f9b8a965-3073-4b51-8dfc-a1bdf31ab63e" (UID: "f9b8a965-3073-4b51-8dfc-a1bdf31ab63e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.669603 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-inventory" (OuterVolumeSpecName: "inventory") pod "f9b8a965-3073-4b51-8dfc-a1bdf31ab63e" (UID: "f9b8a965-3073-4b51-8dfc-a1bdf31ab63e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.673189 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f9b8a965-3073-4b51-8dfc-a1bdf31ab63e" (UID: "f9b8a965-3073-4b51-8dfc-a1bdf31ab63e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.731421 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.731467 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5bx5\" (UniqueName: \"kubernetes.io/projected/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-kube-api-access-g5bx5\") on node \"crc\" DevicePath \"\"" Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.731484 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.731494 4741 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b8a965-3073-4b51-8dfc-a1bdf31ab63e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.927411 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" event={"ID":"f9b8a965-3073-4b51-8dfc-a1bdf31ab63e","Type":"ContainerDied","Data":"609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045"} Feb 26 08:49:45 
crc kubenswrapper[4741]: I0226 08:49:45.927513 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045" Feb 26 08:49:45 crc kubenswrapper[4741]: I0226 08:49:45.927453 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65twb" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.066826 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm"] Feb 26 08:49:46 crc kubenswrapper[4741]: E0226 08:49:46.067576 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerName="extract-content" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.067597 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerName="extract-content" Feb 26 08:49:46 crc kubenswrapper[4741]: E0226 08:49:46.067619 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerName="registry-server" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.067626 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerName="registry-server" Feb 26 08:49:46 crc kubenswrapper[4741]: E0226 08:49:46.067678 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerName="extract-utilities" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.067685 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerName="extract-utilities" Feb 26 08:49:46 crc kubenswrapper[4741]: E0226 08:49:46.067705 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b8a965-3073-4b51-8dfc-a1bdf31ab63e" 
containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.067713 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b8a965-3073-4b51-8dfc-a1bdf31ab63e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.067924 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="297cd5a0-82a7-48d3-b7ce-8ef4f70530d2" containerName="registry-server" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.067964 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b8a965-3073-4b51-8dfc-a1bdf31ab63e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.069176 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.071391 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.071559 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.071864 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.074310 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.079235 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm"] Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.246398 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-hqhlc\" (UniqueName: \"kubernetes.io/projected/7f3e3b88-eb11-45c6-a975-3f8db2941855-kube-api-access-hqhlc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm\" (UID: \"7f3e3b88-eb11-45c6-a975-3f8db2941855\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.246673 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f3e3b88-eb11-45c6-a975-3f8db2941855-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm\" (UID: \"7f3e3b88-eb11-45c6-a975-3f8db2941855\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.247032 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f3e3b88-eb11-45c6-a975-3f8db2941855-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm\" (UID: \"7f3e3b88-eb11-45c6-a975-3f8db2941855\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.350549 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqhlc\" (UniqueName: \"kubernetes.io/projected/7f3e3b88-eb11-45c6-a975-3f8db2941855-kube-api-access-hqhlc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm\" (UID: \"7f3e3b88-eb11-45c6-a975-3f8db2941855\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.350693 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f3e3b88-eb11-45c6-a975-3f8db2941855-ssh-key-openstack-edpm-ipam\") 
pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm\" (UID: \"7f3e3b88-eb11-45c6-a975-3f8db2941855\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.350781 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f3e3b88-eb11-45c6-a975-3f8db2941855-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm\" (UID: \"7f3e3b88-eb11-45c6-a975-3f8db2941855\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.377243 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f3e3b88-eb11-45c6-a975-3f8db2941855-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm\" (UID: \"7f3e3b88-eb11-45c6-a975-3f8db2941855\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.377390 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f3e3b88-eb11-45c6-a975-3f8db2941855-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm\" (UID: \"7f3e3b88-eb11-45c6-a975-3f8db2941855\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" Feb 26 08:49:46 crc kubenswrapper[4741]: I0226 08:49:46.377535 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqhlc\" (UniqueName: \"kubernetes.io/projected/7f3e3b88-eb11-45c6-a975-3f8db2941855-kube-api-access-hqhlc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm\" (UID: \"7f3e3b88-eb11-45c6-a975-3f8db2941855\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" Feb 26 08:49:46 crc kubenswrapper[4741]: 
I0226 08:49:46.439241 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" Feb 26 08:49:47 crc kubenswrapper[4741]: I0226 08:49:47.168962 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm"] Feb 26 08:49:47 crc kubenswrapper[4741]: I0226 08:49:47.957495 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" event={"ID":"7f3e3b88-eb11-45c6-a975-3f8db2941855","Type":"ContainerStarted","Data":"6a12816acb80c5a6476fe3cac15ccdd2056b24d4d49e437075aa3f8a2a2dbc1b"} Feb 26 08:49:48 crc kubenswrapper[4741]: E0226 08:49:48.317760 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice/crio-609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045\": RecentStats: unable to find data in memory cache]" Feb 26 08:49:48 crc kubenswrapper[4741]: E0226 08:49:48.317747 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice/crio-609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045\": RecentStats: unable to find data in memory cache]" Feb 26 08:49:48 crc kubenswrapper[4741]: E0226 08:49:48.477912 4741 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice/crio-609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice\": RecentStats: unable to find data in memory cache]" Feb 26 08:49:48 crc kubenswrapper[4741]: I0226 08:49:48.974241 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" event={"ID":"7f3e3b88-eb11-45c6-a975-3f8db2941855","Type":"ContainerStarted","Data":"f647a23450ad5078db888ca8850de3ba75d4315025b7e4c3404a249ca2eb17f0"} Feb 26 08:49:49 crc kubenswrapper[4741]: I0226 08:49:49.000795 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" podStartSLOduration=2.307459891 podStartE2EDuration="3.000764418s" podCreationTimestamp="2026-02-26 08:49:46 +0000 UTC" firstStartedPulling="2026-02-26 08:49:47.168602455 +0000 UTC m=+2222.164539842" lastFinishedPulling="2026-02-26 08:49:47.861906982 +0000 UTC m=+2222.857844369" observedRunningTime="2026-02-26 08:49:48.992835672 +0000 UTC m=+2223.988773059" watchObservedRunningTime="2026-02-26 08:49:49.000764418 +0000 UTC m=+2223.996701805" Feb 26 08:49:49 crc kubenswrapper[4741]: E0226 08:49:49.222833 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice/crio-609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045\": RecentStats: unable to find 
data in memory cache]" Feb 26 08:49:57 crc kubenswrapper[4741]: I0226 08:49:57.861487 4741 scope.go:117] "RemoveContainer" containerID="f17af16b16c06fb36c44e9c40d492057f3ab8b59aaf40c4221e2b545a1921c0a" Feb 26 08:49:57 crc kubenswrapper[4741]: I0226 08:49:57.907835 4741 scope.go:117] "RemoveContainer" containerID="81aa89192a2e11ea8f9025896484c4a7ef67efeca2a43b7274dd5fff20d4b718" Feb 26 08:49:57 crc kubenswrapper[4741]: I0226 08:49:57.973810 4741 scope.go:117] "RemoveContainer" containerID="a35481248c39820381d946da9a88876af06d587c223b7d733121f5b9afdff07c" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.052353 4741 scope.go:117] "RemoveContainer" containerID="5d59b3a2f05c1e1be2c355709693839d893192716e8ab99fa18422e08892be4d" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.121589 4741 scope.go:117] "RemoveContainer" containerID="b661443ad9f0e8c7b58f9674d4034ca1a0b8e81b4e6c7814813f04e9069f6e34" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.198304 4741 scope.go:117] "RemoveContainer" containerID="c3b3b68ab49373e482a2051ddcb2811aa704dc35f5465b979f0925e40661363f" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.247245 4741 scope.go:117] "RemoveContainer" containerID="a0bad3a1c8211b027df83c6aa6b97e1824bb3534c3d7b1ff87e6dca6e08fb0a3" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.284713 4741 scope.go:117] "RemoveContainer" containerID="49492da9e102c1d580506b4d5f4b63b99805c13713c27abd920fecfc7940b33d" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.319389 4741 scope.go:117] "RemoveContainer" containerID="28104f18c3c62903044707c0234661e5649c83daf3d8baf2e3e114c6c1be0b6f" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.360865 4741 scope.go:117] "RemoveContainer" containerID="a2222b0246e682d4a24195ec683d19fca7742be29c67c82c0178b9076de830ee" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.414363 4741 scope.go:117] "RemoveContainer" containerID="c27fb3161fe1f4c1a6196dd49acea1f335f3e9177e9f5c639e3639fee0c27a22" Feb 26 
08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.451159 4741 scope.go:117] "RemoveContainer" containerID="ef21e7636cab28e61207dd6582b54bfbcf4373f8d2f6002be32f4e3e1854b0c3" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.479878 4741 scope.go:117] "RemoveContainer" containerID="3f3662413c3512f9cd116f9981d6dac17865fdbb696b19dd45b77eeecebe71d5" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.504339 4741 scope.go:117] "RemoveContainer" containerID="ed81ea1b45a6117535ca7d1d4b879dbe45c3a8ed95ea4cb344ab4cef24aeae82" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.546541 4741 scope.go:117] "RemoveContainer" containerID="555218fc003cd61346324b54cbd4fe9a364a6b4ec6b7081dac9d9b85e5786835" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.575807 4741 scope.go:117] "RemoveContainer" containerID="0b079cdf3c864230332df251d5528b31746635305aa928f13abed5063f0d16c1" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.624979 4741 scope.go:117] "RemoveContainer" containerID="583f330f0940479ac4b6a998fa06866efa22ea82372cbf445987db0fe0a40767" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.655143 4741 scope.go:117] "RemoveContainer" containerID="47911601efd911ea46aa92154f47fe5c8423a33303e8ae738bb20a5471054b43" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.696334 4741 scope.go:117] "RemoveContainer" containerID="d1b8da48215504119ad9671b68c4329ad275e573c944c1388fb5f60d5c187ce5" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.725744 4741 scope.go:117] "RemoveContainer" containerID="2e1bac17917a1dd3ce17c4305364665f8cd7f86139ca5fc6371cd98d091a61ea" Feb 26 08:49:58 crc kubenswrapper[4741]: I0226 08:49:58.768030 4741 scope.go:117] "RemoveContainer" containerID="84be32f16f1941d4eabe835f92db1ee80e3917dfe1322f3463776406bf3ee641" Feb 26 08:49:59 crc kubenswrapper[4741]: E0226 08:49:59.587407 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice/crio-609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045\": RecentStats: unable to find data in memory cache]" Feb 26 08:50:00 crc kubenswrapper[4741]: I0226 08:50:00.147770 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534930-vwj4n"] Feb 26 08:50:00 crc kubenswrapper[4741]: I0226 08:50:00.150500 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534930-vwj4n" Feb 26 08:50:00 crc kubenswrapper[4741]: I0226 08:50:00.153894 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:50:00 crc kubenswrapper[4741]: I0226 08:50:00.153995 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:50:00 crc kubenswrapper[4741]: I0226 08:50:00.154076 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:50:00 crc kubenswrapper[4741]: I0226 08:50:00.165156 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534930-vwj4n"] Feb 26 08:50:00 crc kubenswrapper[4741]: I0226 08:50:00.225031 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkhzl\" (UniqueName: \"kubernetes.io/projected/357ed6dc-5c4b-486e-91b0-850eed492bb0-kube-api-access-nkhzl\") pod \"auto-csr-approver-29534930-vwj4n\" (UID: \"357ed6dc-5c4b-486e-91b0-850eed492bb0\") " pod="openshift-infra/auto-csr-approver-29534930-vwj4n" Feb 26 08:50:00 crc kubenswrapper[4741]: I0226 08:50:00.327655 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nkhzl\" (UniqueName: \"kubernetes.io/projected/357ed6dc-5c4b-486e-91b0-850eed492bb0-kube-api-access-nkhzl\") pod \"auto-csr-approver-29534930-vwj4n\" (UID: \"357ed6dc-5c4b-486e-91b0-850eed492bb0\") " pod="openshift-infra/auto-csr-approver-29534930-vwj4n" Feb 26 08:50:00 crc kubenswrapper[4741]: I0226 08:50:00.349076 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkhzl\" (UniqueName: \"kubernetes.io/projected/357ed6dc-5c4b-486e-91b0-850eed492bb0-kube-api-access-nkhzl\") pod \"auto-csr-approver-29534930-vwj4n\" (UID: \"357ed6dc-5c4b-486e-91b0-850eed492bb0\") " pod="openshift-infra/auto-csr-approver-29534930-vwj4n" Feb 26 08:50:00 crc kubenswrapper[4741]: I0226 08:50:00.502445 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534930-vwj4n" Feb 26 08:50:01 crc kubenswrapper[4741]: W0226 08:50:01.019573 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod357ed6dc_5c4b_486e_91b0_850eed492bb0.slice/crio-c8baed57a3e6c206526fb44f6c487cef6c9edae61be0da2c053731960f1158d5 WatchSource:0}: Error finding container c8baed57a3e6c206526fb44f6c487cef6c9edae61be0da2c053731960f1158d5: Status 404 returned error can't find the container with id c8baed57a3e6c206526fb44f6c487cef6c9edae61be0da2c053731960f1158d5 Feb 26 08:50:01 crc kubenswrapper[4741]: I0226 08:50:01.024430 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534930-vwj4n"] Feb 26 08:50:01 crc kubenswrapper[4741]: I0226 08:50:01.242779 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534930-vwj4n" event={"ID":"357ed6dc-5c4b-486e-91b0-850eed492bb0","Type":"ContainerStarted","Data":"c8baed57a3e6c206526fb44f6c487cef6c9edae61be0da2c053731960f1158d5"} Feb 26 08:50:03 crc 
kubenswrapper[4741]: I0226 08:50:03.292760 4741 generic.go:334] "Generic (PLEG): container finished" podID="357ed6dc-5c4b-486e-91b0-850eed492bb0" containerID="1870af5d493460dd7616bfeca83b80d4aa2bdd05c3b9b58ac98e878dd26d8885" exitCode=0 Feb 26 08:50:03 crc kubenswrapper[4741]: I0226 08:50:03.293276 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534930-vwj4n" event={"ID":"357ed6dc-5c4b-486e-91b0-850eed492bb0","Type":"ContainerDied","Data":"1870af5d493460dd7616bfeca83b80d4aa2bdd05c3b9b58ac98e878dd26d8885"} Feb 26 08:50:03 crc kubenswrapper[4741]: E0226 08:50:03.483444 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice/crio-609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice\": RecentStats: unable to find data in memory cache]" Feb 26 08:50:04 crc kubenswrapper[4741]: I0226 08:50:04.924459 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534930-vwj4n" Feb 26 08:50:05 crc kubenswrapper[4741]: I0226 08:50:05.000691 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkhzl\" (UniqueName: \"kubernetes.io/projected/357ed6dc-5c4b-486e-91b0-850eed492bb0-kube-api-access-nkhzl\") pod \"357ed6dc-5c4b-486e-91b0-850eed492bb0\" (UID: \"357ed6dc-5c4b-486e-91b0-850eed492bb0\") " Feb 26 08:50:05 crc kubenswrapper[4741]: I0226 08:50:05.022381 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/357ed6dc-5c4b-486e-91b0-850eed492bb0-kube-api-access-nkhzl" (OuterVolumeSpecName: "kube-api-access-nkhzl") pod "357ed6dc-5c4b-486e-91b0-850eed492bb0" (UID: "357ed6dc-5c4b-486e-91b0-850eed492bb0"). InnerVolumeSpecName "kube-api-access-nkhzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:50:05 crc kubenswrapper[4741]: I0226 08:50:05.105429 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkhzl\" (UniqueName: \"kubernetes.io/projected/357ed6dc-5c4b-486e-91b0-850eed492bb0-kube-api-access-nkhzl\") on node \"crc\" DevicePath \"\"" Feb 26 08:50:05 crc kubenswrapper[4741]: I0226 08:50:05.320716 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534930-vwj4n" event={"ID":"357ed6dc-5c4b-486e-91b0-850eed492bb0","Type":"ContainerDied","Data":"c8baed57a3e6c206526fb44f6c487cef6c9edae61be0da2c053731960f1158d5"} Feb 26 08:50:05 crc kubenswrapper[4741]: I0226 08:50:05.320778 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534930-vwj4n" Feb 26 08:50:05 crc kubenswrapper[4741]: I0226 08:50:05.320793 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8baed57a3e6c206526fb44f6c487cef6c9edae61be0da2c053731960f1158d5" Feb 26 08:50:06 crc kubenswrapper[4741]: I0226 08:50:06.053010 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534924-qblgp"] Feb 26 08:50:06 crc kubenswrapper[4741]: I0226 08:50:06.066007 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534924-qblgp"] Feb 26 08:50:07 crc kubenswrapper[4741]: I0226 08:50:07.801786 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="458edee3-5c0d-45a1-93e3-80a518d7a3e8" path="/var/lib/kubelet/pods/458edee3-5c0d-45a1-93e3-80a518d7a3e8/volumes" Feb 26 08:50:10 crc kubenswrapper[4741]: E0226 08:50:10.008684 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice/crio-609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice\": RecentStats: unable to find data in memory cache]" Feb 26 08:50:18 crc kubenswrapper[4741]: E0226 08:50:18.810539 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice/crio-609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045\": 
RecentStats: unable to find data in memory cache]" Feb 26 08:50:20 crc kubenswrapper[4741]: E0226 08:50:20.085217 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice/crio-609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045\": RecentStats: unable to find data in memory cache]" Feb 26 08:50:30 crc kubenswrapper[4741]: E0226 08:50:30.418662 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice/crio-609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice\": RecentStats: unable to find data in memory cache]" Feb 26 08:50:33 crc kubenswrapper[4741]: I0226 08:50:33.057214 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-wz7kr"] Feb 26 08:50:33 crc kubenswrapper[4741]: I0226 08:50:33.074620 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-wz7kr"] Feb 26 08:50:33 crc kubenswrapper[4741]: E0226 08:50:33.482157 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice/crio-609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice\": RecentStats: unable to find data in memory cache]" Feb 26 08:50:33 crc kubenswrapper[4741]: I0226 08:50:33.806971 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325772f-698b-4597-8918-0f46f598545e" path="/var/lib/kubelet/pods/2325772f-698b-4597-8918-0f46f598545e/volumes" Feb 26 08:50:40 crc kubenswrapper[4741]: E0226 08:50:40.749546 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice/crio-609a37298219819dffe63349940464d6c84711c0f1fe4c7d053ffb21504e3045\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b8a965_3073_4b51_8dfc_a1bdf31ab63e.slice\": RecentStats: unable to find data in memory cache]" Feb 26 08:50:49 crc kubenswrapper[4741]: I0226 08:50:49.054935 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hfwv4"] Feb 26 08:50:49 crc kubenswrapper[4741]: I0226 08:50:49.075053 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hfwv4"] Feb 26 08:50:49 crc kubenswrapper[4741]: I0226 08:50:49.821488 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16e2e3de-8ab8-4670-b4c5-6375011e04e7" path="/var/lib/kubelet/pods/16e2e3de-8ab8-4670-b4c5-6375011e04e7/volumes" Feb 26 08:50:52 crc kubenswrapper[4741]: I0226 08:50:52.041761 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4w2q8"] Feb 26 08:50:52 crc kubenswrapper[4741]: I0226 08:50:52.060445 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4w2q8"] Feb 26 08:50:53 crc kubenswrapper[4741]: I0226 08:50:53.802507 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="3fbbec2b-c3f8-406b-ac5a-6c9749b1631d" path="/var/lib/kubelet/pods/3fbbec2b-c3f8-406b-ac5a-6c9749b1631d/volumes" Feb 26 08:50:59 crc kubenswrapper[4741]: I0226 08:50:59.428985 4741 scope.go:117] "RemoveContainer" containerID="7d2d5c20e4d9d41731ee614841e831503fb1b42a2e5f3c23d08147b0a2021c5f" Feb 26 08:50:59 crc kubenswrapper[4741]: I0226 08:50:59.485487 4741 scope.go:117] "RemoveContainer" containerID="b0ba8ab0528f3974a0418972692b89cb4c16621b64431f99c865fe7c4a74fc3f" Feb 26 08:50:59 crc kubenswrapper[4741]: I0226 08:50:59.539540 4741 scope.go:117] "RemoveContainer" containerID="ef1e45fe0d040c52aaf7160a0307b78d1f5dbd00d5b48914ae3125bcb0e7ab5d" Feb 26 08:50:59 crc kubenswrapper[4741]: I0226 08:50:59.600926 4741 scope.go:117] "RemoveContainer" containerID="91c0e440581e76678224111a00fdd7560adc835d87192175671a3f0a096b4a80" Feb 26 08:51:03 crc kubenswrapper[4741]: I0226 08:51:03.077807 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sd2kk"] Feb 26 08:51:03 crc kubenswrapper[4741]: I0226 08:51:03.102608 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-8mgkv"] Feb 26 08:51:03 crc kubenswrapper[4741]: I0226 08:51:03.118946 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-8mgkv"] Feb 26 08:51:03 crc kubenswrapper[4741]: I0226 08:51:03.134612 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sd2kk"] Feb 26 08:51:03 crc kubenswrapper[4741]: I0226 08:51:03.805349 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03befef7-03ec-47b0-b178-46e527d8198e" path="/var/lib/kubelet/pods/03befef7-03ec-47b0-b178-46e527d8198e/volumes" Feb 26 08:51:03 crc kubenswrapper[4741]: I0226 08:51:03.806708 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="453e119a-80ff-4c19-b7d0-0860410fcc09" path="/var/lib/kubelet/pods/453e119a-80ff-4c19-b7d0-0860410fcc09/volumes" Feb 26 08:51:07 crc 
kubenswrapper[4741]: I0226 08:51:07.040165 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-wbg8p"] Feb 26 08:51:07 crc kubenswrapper[4741]: I0226 08:51:07.053082 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-wbg8p"] Feb 26 08:51:07 crc kubenswrapper[4741]: I0226 08:51:07.806103 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d1395bb-ffb5-492e-b214-4434c210acf7" path="/var/lib/kubelet/pods/4d1395bb-ffb5-492e-b214-4434c210acf7/volumes" Feb 26 08:51:25 crc kubenswrapper[4741]: I0226 08:51:25.149481 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:51:25 crc kubenswrapper[4741]: I0226 08:51:25.150465 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.274410 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xjs8w"] Feb 26 08:51:41 crc kubenswrapper[4741]: E0226 08:51:41.275703 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="357ed6dc-5c4b-486e-91b0-850eed492bb0" containerName="oc" Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.275747 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="357ed6dc-5c4b-486e-91b0-850eed492bb0" containerName="oc" Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.276189 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="357ed6dc-5c4b-486e-91b0-850eed492bb0" 
containerName="oc" Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.283337 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.356314 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjs8w"] Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.452156 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dnrh\" (UniqueName: \"kubernetes.io/projected/d23ca124-67fb-4074-8c5d-23360fb0ef04-kube-api-access-2dnrh\") pod \"redhat-marketplace-xjs8w\" (UID: \"d23ca124-67fb-4074-8c5d-23360fb0ef04\") " pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.452441 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d23ca124-67fb-4074-8c5d-23360fb0ef04-utilities\") pod \"redhat-marketplace-xjs8w\" (UID: \"d23ca124-67fb-4074-8c5d-23360fb0ef04\") " pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.452758 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d23ca124-67fb-4074-8c5d-23360fb0ef04-catalog-content\") pod \"redhat-marketplace-xjs8w\" (UID: \"d23ca124-67fb-4074-8c5d-23360fb0ef04\") " pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.555182 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dnrh\" (UniqueName: \"kubernetes.io/projected/d23ca124-67fb-4074-8c5d-23360fb0ef04-kube-api-access-2dnrh\") pod \"redhat-marketplace-xjs8w\" (UID: \"d23ca124-67fb-4074-8c5d-23360fb0ef04\") " 
pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.555317 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d23ca124-67fb-4074-8c5d-23360fb0ef04-utilities\") pod \"redhat-marketplace-xjs8w\" (UID: \"d23ca124-67fb-4074-8c5d-23360fb0ef04\") " pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.555434 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d23ca124-67fb-4074-8c5d-23360fb0ef04-catalog-content\") pod \"redhat-marketplace-xjs8w\" (UID: \"d23ca124-67fb-4074-8c5d-23360fb0ef04\") " pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.556059 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d23ca124-67fb-4074-8c5d-23360fb0ef04-utilities\") pod \"redhat-marketplace-xjs8w\" (UID: \"d23ca124-67fb-4074-8c5d-23360fb0ef04\") " pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.556098 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d23ca124-67fb-4074-8c5d-23360fb0ef04-catalog-content\") pod \"redhat-marketplace-xjs8w\" (UID: \"d23ca124-67fb-4074-8c5d-23360fb0ef04\") " pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.579076 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dnrh\" (UniqueName: \"kubernetes.io/projected/d23ca124-67fb-4074-8c5d-23360fb0ef04-kube-api-access-2dnrh\") pod \"redhat-marketplace-xjs8w\" (UID: \"d23ca124-67fb-4074-8c5d-23360fb0ef04\") " pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 
08:51:41 crc kubenswrapper[4741]: I0226 08:51:41.609944 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:51:42 crc kubenswrapper[4741]: I0226 08:51:42.168057 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjs8w"] Feb 26 08:51:42 crc kubenswrapper[4741]: I0226 08:51:42.712212 4741 generic.go:334] "Generic (PLEG): container finished" podID="d23ca124-67fb-4074-8c5d-23360fb0ef04" containerID="8b0fb80113e54f765f5492ce11c3c29d6e45df6616ba7f51083dc6ea7e563794" exitCode=0 Feb 26 08:51:42 crc kubenswrapper[4741]: I0226 08:51:42.712273 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjs8w" event={"ID":"d23ca124-67fb-4074-8c5d-23360fb0ef04","Type":"ContainerDied","Data":"8b0fb80113e54f765f5492ce11c3c29d6e45df6616ba7f51083dc6ea7e563794"} Feb 26 08:51:42 crc kubenswrapper[4741]: I0226 08:51:42.712827 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjs8w" event={"ID":"d23ca124-67fb-4074-8c5d-23360fb0ef04","Type":"ContainerStarted","Data":"2a0a15aa7753e4a6c0b75c2745202e37b4ad13185b741833518a154830c60925"} Feb 26 08:51:43 crc kubenswrapper[4741]: I0226 08:51:43.728483 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjs8w" event={"ID":"d23ca124-67fb-4074-8c5d-23360fb0ef04","Type":"ContainerStarted","Data":"9e481229feda3e7bbf1c705ae05646145afd139de110af3932adf8fd75f4648a"} Feb 26 08:51:45 crc kubenswrapper[4741]: I0226 08:51:45.758254 4741 generic.go:334] "Generic (PLEG): container finished" podID="d23ca124-67fb-4074-8c5d-23360fb0ef04" containerID="9e481229feda3e7bbf1c705ae05646145afd139de110af3932adf8fd75f4648a" exitCode=0 Feb 26 08:51:45 crc kubenswrapper[4741]: I0226 08:51:45.758360 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xjs8w" event={"ID":"d23ca124-67fb-4074-8c5d-23360fb0ef04","Type":"ContainerDied","Data":"9e481229feda3e7bbf1c705ae05646145afd139de110af3932adf8fd75f4648a"} Feb 26 08:51:47 crc kubenswrapper[4741]: I0226 08:51:47.816541 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjs8w" event={"ID":"d23ca124-67fb-4074-8c5d-23360fb0ef04","Type":"ContainerStarted","Data":"27aac1da7a17c970d86f7d6a27405bfc0ce931390e81b19e7accb1c51ffa4899"} Feb 26 08:51:47 crc kubenswrapper[4741]: I0226 08:51:47.858405 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xjs8w" podStartSLOduration=2.65378998 podStartE2EDuration="6.858377959s" podCreationTimestamp="2026-02-26 08:51:41 +0000 UTC" firstStartedPulling="2026-02-26 08:51:42.715142794 +0000 UTC m=+2337.711080181" lastFinishedPulling="2026-02-26 08:51:46.919730773 +0000 UTC m=+2341.915668160" observedRunningTime="2026-02-26 08:51:47.845311937 +0000 UTC m=+2342.841249334" watchObservedRunningTime="2026-02-26 08:51:47.858377959 +0000 UTC m=+2342.854315346" Feb 26 08:51:51 crc kubenswrapper[4741]: I0226 08:51:51.610318 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:51:51 crc kubenswrapper[4741]: I0226 08:51:51.610685 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:51:51 crc kubenswrapper[4741]: I0226 08:51:51.664306 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:51:54 crc kubenswrapper[4741]: I0226 08:51:54.879518 4741 generic.go:334] "Generic (PLEG): container finished" podID="7f3e3b88-eb11-45c6-a975-3f8db2941855" containerID="f647a23450ad5078db888ca8850de3ba75d4315025b7e4c3404a249ca2eb17f0" exitCode=0 Feb 26 08:51:54 
crc kubenswrapper[4741]: I0226 08:51:54.879634 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" event={"ID":"7f3e3b88-eb11-45c6-a975-3f8db2941855","Type":"ContainerDied","Data":"f647a23450ad5078db888ca8850de3ba75d4315025b7e4c3404a249ca2eb17f0"} Feb 26 08:51:55 crc kubenswrapper[4741]: I0226 08:51:55.149636 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:51:55 crc kubenswrapper[4741]: I0226 08:51:55.149722 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:51:56 crc kubenswrapper[4741]: I0226 08:51:56.398325 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" Feb 26 08:51:56 crc kubenswrapper[4741]: I0226 08:51:56.431747 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqhlc\" (UniqueName: \"kubernetes.io/projected/7f3e3b88-eb11-45c6-a975-3f8db2941855-kube-api-access-hqhlc\") pod \"7f3e3b88-eb11-45c6-a975-3f8db2941855\" (UID: \"7f3e3b88-eb11-45c6-a975-3f8db2941855\") " Feb 26 08:51:56 crc kubenswrapper[4741]: I0226 08:51:56.431811 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f3e3b88-eb11-45c6-a975-3f8db2941855-ssh-key-openstack-edpm-ipam\") pod \"7f3e3b88-eb11-45c6-a975-3f8db2941855\" (UID: \"7f3e3b88-eb11-45c6-a975-3f8db2941855\") " Feb 26 08:51:56 crc kubenswrapper[4741]: I0226 08:51:56.431840 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f3e3b88-eb11-45c6-a975-3f8db2941855-inventory\") pod \"7f3e3b88-eb11-45c6-a975-3f8db2941855\" (UID: \"7f3e3b88-eb11-45c6-a975-3f8db2941855\") " Feb 26 08:51:56 crc kubenswrapper[4741]: I0226 08:51:56.437991 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f3e3b88-eb11-45c6-a975-3f8db2941855-kube-api-access-hqhlc" (OuterVolumeSpecName: "kube-api-access-hqhlc") pod "7f3e3b88-eb11-45c6-a975-3f8db2941855" (UID: "7f3e3b88-eb11-45c6-a975-3f8db2941855"). InnerVolumeSpecName "kube-api-access-hqhlc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:51:56 crc kubenswrapper[4741]: I0226 08:51:56.477711 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3e3b88-eb11-45c6-a975-3f8db2941855-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f3e3b88-eb11-45c6-a975-3f8db2941855" (UID: "7f3e3b88-eb11-45c6-a975-3f8db2941855"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:51:56 crc kubenswrapper[4741]: I0226 08:51:56.482149 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3e3b88-eb11-45c6-a975-3f8db2941855-inventory" (OuterVolumeSpecName: "inventory") pod "7f3e3b88-eb11-45c6-a975-3f8db2941855" (UID: "7f3e3b88-eb11-45c6-a975-3f8db2941855"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:51:56 crc kubenswrapper[4741]: I0226 08:51:56.536026 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqhlc\" (UniqueName: \"kubernetes.io/projected/7f3e3b88-eb11-45c6-a975-3f8db2941855-kube-api-access-hqhlc\") on node \"crc\" DevicePath \"\"" Feb 26 08:51:56 crc kubenswrapper[4741]: I0226 08:51:56.536330 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f3e3b88-eb11-45c6-a975-3f8db2941855-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:51:56 crc kubenswrapper[4741]: I0226 08:51:56.536396 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f3e3b88-eb11-45c6-a975-3f8db2941855-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 08:51:56 crc kubenswrapper[4741]: I0226 08:51:56.931522 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" 
event={"ID":"7f3e3b88-eb11-45c6-a975-3f8db2941855","Type":"ContainerDied","Data":"6a12816acb80c5a6476fe3cac15ccdd2056b24d4d49e437075aa3f8a2a2dbc1b"} Feb 26 08:51:56 crc kubenswrapper[4741]: I0226 08:51:56.931596 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a12816acb80c5a6476fe3cac15ccdd2056b24d4d49e437075aa3f8a2a2dbc1b" Feb 26 08:51:56 crc kubenswrapper[4741]: I0226 08:51:56.931738 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.023449 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk"] Feb 26 08:51:57 crc kubenswrapper[4741]: E0226 08:51:57.024207 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f3e3b88-eb11-45c6-a975-3f8db2941855" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.024226 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f3e3b88-eb11-45c6-a975-3f8db2941855" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.024496 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f3e3b88-eb11-45c6-a975-3f8db2941855" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.025676 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.033959 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.034394 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.034579 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.038224 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.040555 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk"] Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.069555 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a34fc776-add8-4082-8fa8-041ac3ee8860-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk\" (UID: \"a34fc776-add8-4082-8fa8-041ac3ee8860\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.069669 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szfhz\" (UniqueName: \"kubernetes.io/projected/a34fc776-add8-4082-8fa8-041ac3ee8860-kube-api-access-szfhz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk\" (UID: \"a34fc776-add8-4082-8fa8-041ac3ee8860\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" Feb 26 
08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.069833 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a34fc776-add8-4082-8fa8-041ac3ee8860-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk\" (UID: \"a34fc776-add8-4082-8fa8-041ac3ee8860\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.173133 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a34fc776-add8-4082-8fa8-041ac3ee8860-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk\" (UID: \"a34fc776-add8-4082-8fa8-041ac3ee8860\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.173240 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szfhz\" (UniqueName: \"kubernetes.io/projected/a34fc776-add8-4082-8fa8-041ac3ee8860-kube-api-access-szfhz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk\" (UID: \"a34fc776-add8-4082-8fa8-041ac3ee8860\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.173422 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a34fc776-add8-4082-8fa8-041ac3ee8860-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk\" (UID: \"a34fc776-add8-4082-8fa8-041ac3ee8860\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.179844 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a34fc776-add8-4082-8fa8-041ac3ee8860-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk\" (UID: \"a34fc776-add8-4082-8fa8-041ac3ee8860\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.179896 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a34fc776-add8-4082-8fa8-041ac3ee8860-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk\" (UID: \"a34fc776-add8-4082-8fa8-041ac3ee8860\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.192378 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szfhz\" (UniqueName: \"kubernetes.io/projected/a34fc776-add8-4082-8fa8-041ac3ee8860-kube-api-access-szfhz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk\" (UID: \"a34fc776-add8-4082-8fa8-041ac3ee8860\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.360451 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" Feb 26 08:51:57 crc kubenswrapper[4741]: I0226 08:51:57.965031 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk"] Feb 26 08:51:58 crc kubenswrapper[4741]: I0226 08:51:58.956492 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" event={"ID":"a34fc776-add8-4082-8fa8-041ac3ee8860","Type":"ContainerStarted","Data":"86a44e719d4db48cb2347344a685c45283768db5683efcd2a92356b5b91e3eec"} Feb 26 08:51:58 crc kubenswrapper[4741]: I0226 08:51:58.956810 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" event={"ID":"a34fc776-add8-4082-8fa8-041ac3ee8860","Type":"ContainerStarted","Data":"86cd61bd4304c08b558a3c0c997e5c0845f29e545c009effb6339b992b718bc3"} Feb 26 08:51:58 crc kubenswrapper[4741]: I0226 08:51:58.986175 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" podStartSLOduration=2.5461377499999998 podStartE2EDuration="2.986142833s" podCreationTimestamp="2026-02-26 08:51:56 +0000 UTC" firstStartedPulling="2026-02-26 08:51:57.96432436 +0000 UTC m=+2352.960261747" lastFinishedPulling="2026-02-26 08:51:58.404329443 +0000 UTC m=+2353.400266830" observedRunningTime="2026-02-26 08:51:58.978468424 +0000 UTC m=+2353.974405811" watchObservedRunningTime="2026-02-26 08:51:58.986142833 +0000 UTC m=+2353.982080230" Feb 26 08:51:59 crc kubenswrapper[4741]: I0226 08:51:59.808329 4741 scope.go:117] "RemoveContainer" containerID="6054d73673f66b7d855edaf87e94603cff02a30938e643fbfdca26b130d21776" Feb 26 08:51:59 crc kubenswrapper[4741]: I0226 08:51:59.840846 4741 scope.go:117] "RemoveContainer" 
containerID="54d5013073b7db0d99842fab02cef1ebb2539a4cfc20185c8d026ff06e88f931" Feb 26 08:51:59 crc kubenswrapper[4741]: I0226 08:51:59.912235 4741 scope.go:117] "RemoveContainer" containerID="7a76fc68ece25bc32a4f24d0da7b48e65616e40b69139b1ac12435a66b804ab3" Feb 26 08:52:00 crc kubenswrapper[4741]: I0226 08:52:00.145851 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534932-cvlg9"] Feb 26 08:52:00 crc kubenswrapper[4741]: I0226 08:52:00.147996 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534932-cvlg9" Feb 26 08:52:00 crc kubenswrapper[4741]: I0226 08:52:00.150646 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:52:00 crc kubenswrapper[4741]: I0226 08:52:00.151137 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:52:00 crc kubenswrapper[4741]: I0226 08:52:00.151210 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:52:00 crc kubenswrapper[4741]: I0226 08:52:00.165589 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534932-cvlg9"] Feb 26 08:52:00 crc kubenswrapper[4741]: I0226 08:52:00.263704 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52fdx\" (UniqueName: \"kubernetes.io/projected/acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8-kube-api-access-52fdx\") pod \"auto-csr-approver-29534932-cvlg9\" (UID: \"acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8\") " pod="openshift-infra/auto-csr-approver-29534932-cvlg9" Feb 26 08:52:00 crc kubenswrapper[4741]: I0226 08:52:00.367575 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52fdx\" (UniqueName: 
\"kubernetes.io/projected/acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8-kube-api-access-52fdx\") pod \"auto-csr-approver-29534932-cvlg9\" (UID: \"acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8\") " pod="openshift-infra/auto-csr-approver-29534932-cvlg9" Feb 26 08:52:00 crc kubenswrapper[4741]: I0226 08:52:00.391388 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52fdx\" (UniqueName: \"kubernetes.io/projected/acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8-kube-api-access-52fdx\") pod \"auto-csr-approver-29534932-cvlg9\" (UID: \"acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8\") " pod="openshift-infra/auto-csr-approver-29534932-cvlg9" Feb 26 08:52:00 crc kubenswrapper[4741]: I0226 08:52:00.474846 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534932-cvlg9" Feb 26 08:52:00 crc kubenswrapper[4741]: I0226 08:52:00.976597 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534932-cvlg9"] Feb 26 08:52:00 crc kubenswrapper[4741]: W0226 08:52:00.980035 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacaa57c7_2dc7_49a7_8e6b_a661cfbfe6e8.slice/crio-057b342a69bfc5a8827b5da98a160e3d1532c858187d527bf88c17d4aa1e6109 WatchSource:0}: Error finding container 057b342a69bfc5a8827b5da98a160e3d1532c858187d527bf88c17d4aa1e6109: Status 404 returned error can't find the container with id 057b342a69bfc5a8827b5da98a160e3d1532c858187d527bf88c17d4aa1e6109 Feb 26 08:52:01 crc kubenswrapper[4741]: I0226 08:52:01.013157 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534932-cvlg9" event={"ID":"acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8","Type":"ContainerStarted","Data":"057b342a69bfc5a8827b5da98a160e3d1532c858187d527bf88c17d4aa1e6109"} Feb 26 08:52:01 crc kubenswrapper[4741]: I0226 08:52:01.673839 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:52:01 crc kubenswrapper[4741]: I0226 08:52:01.762805 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjs8w"] Feb 26 08:52:02 crc kubenswrapper[4741]: I0226 08:52:02.025844 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xjs8w" podUID="d23ca124-67fb-4074-8c5d-23360fb0ef04" containerName="registry-server" containerID="cri-o://27aac1da7a17c970d86f7d6a27405bfc0ce931390e81b19e7accb1c51ffa4899" gracePeriod=2 Feb 26 08:52:02 crc kubenswrapper[4741]: I0226 08:52:02.657351 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:52:02 crc kubenswrapper[4741]: I0226 08:52:02.769000 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dnrh\" (UniqueName: \"kubernetes.io/projected/d23ca124-67fb-4074-8c5d-23360fb0ef04-kube-api-access-2dnrh\") pod \"d23ca124-67fb-4074-8c5d-23360fb0ef04\" (UID: \"d23ca124-67fb-4074-8c5d-23360fb0ef04\") " Feb 26 08:52:02 crc kubenswrapper[4741]: I0226 08:52:02.769549 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d23ca124-67fb-4074-8c5d-23360fb0ef04-utilities\") pod \"d23ca124-67fb-4074-8c5d-23360fb0ef04\" (UID: \"d23ca124-67fb-4074-8c5d-23360fb0ef04\") " Feb 26 08:52:02 crc kubenswrapper[4741]: I0226 08:52:02.769811 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d23ca124-67fb-4074-8c5d-23360fb0ef04-catalog-content\") pod \"d23ca124-67fb-4074-8c5d-23360fb0ef04\" (UID: \"d23ca124-67fb-4074-8c5d-23360fb0ef04\") " Feb 26 08:52:02 crc kubenswrapper[4741]: I0226 08:52:02.774284 4741 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/d23ca124-67fb-4074-8c5d-23360fb0ef04-utilities" (OuterVolumeSpecName: "utilities") pod "d23ca124-67fb-4074-8c5d-23360fb0ef04" (UID: "d23ca124-67fb-4074-8c5d-23360fb0ef04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:52:02 crc kubenswrapper[4741]: I0226 08:52:02.782083 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d23ca124-67fb-4074-8c5d-23360fb0ef04-kube-api-access-2dnrh" (OuterVolumeSpecName: "kube-api-access-2dnrh") pod "d23ca124-67fb-4074-8c5d-23360fb0ef04" (UID: "d23ca124-67fb-4074-8c5d-23360fb0ef04"). InnerVolumeSpecName "kube-api-access-2dnrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:52:02 crc kubenswrapper[4741]: I0226 08:52:02.820366 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d23ca124-67fb-4074-8c5d-23360fb0ef04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d23ca124-67fb-4074-8c5d-23360fb0ef04" (UID: "d23ca124-67fb-4074-8c5d-23360fb0ef04"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:52:02 crc kubenswrapper[4741]: I0226 08:52:02.887852 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dnrh\" (UniqueName: \"kubernetes.io/projected/d23ca124-67fb-4074-8c5d-23360fb0ef04-kube-api-access-2dnrh\") on node \"crc\" DevicePath \"\"" Feb 26 08:52:02 crc kubenswrapper[4741]: I0226 08:52:02.887897 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d23ca124-67fb-4074-8c5d-23360fb0ef04-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:52:02 crc kubenswrapper[4741]: I0226 08:52:02.887910 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d23ca124-67fb-4074-8c5d-23360fb0ef04-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.047045 4741 generic.go:334] "Generic (PLEG): container finished" podID="d23ca124-67fb-4074-8c5d-23360fb0ef04" containerID="27aac1da7a17c970d86f7d6a27405bfc0ce931390e81b19e7accb1c51ffa4899" exitCode=0 Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.047148 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjs8w" event={"ID":"d23ca124-67fb-4074-8c5d-23360fb0ef04","Type":"ContainerDied","Data":"27aac1da7a17c970d86f7d6a27405bfc0ce931390e81b19e7accb1c51ffa4899"} Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.047188 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjs8w" event={"ID":"d23ca124-67fb-4074-8c5d-23360fb0ef04","Type":"ContainerDied","Data":"2a0a15aa7753e4a6c0b75c2745202e37b4ad13185b741833518a154830c60925"} Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.047212 4741 scope.go:117] "RemoveContainer" containerID="27aac1da7a17c970d86f7d6a27405bfc0ce931390e81b19e7accb1c51ffa4899" Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 
08:52:03.047525 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjs8w" Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.079749 4741 scope.go:117] "RemoveContainer" containerID="9e481229feda3e7bbf1c705ae05646145afd139de110af3932adf8fd75f4648a" Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.110663 4741 scope.go:117] "RemoveContainer" containerID="8b0fb80113e54f765f5492ce11c3c29d6e45df6616ba7f51083dc6ea7e563794" Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.112039 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjs8w"] Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.137892 4741 scope.go:117] "RemoveContainer" containerID="27aac1da7a17c970d86f7d6a27405bfc0ce931390e81b19e7accb1c51ffa4899" Feb 26 08:52:03 crc kubenswrapper[4741]: E0226 08:52:03.139022 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27aac1da7a17c970d86f7d6a27405bfc0ce931390e81b19e7accb1c51ffa4899\": container with ID starting with 27aac1da7a17c970d86f7d6a27405bfc0ce931390e81b19e7accb1c51ffa4899 not found: ID does not exist" containerID="27aac1da7a17c970d86f7d6a27405bfc0ce931390e81b19e7accb1c51ffa4899" Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.139126 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27aac1da7a17c970d86f7d6a27405bfc0ce931390e81b19e7accb1c51ffa4899"} err="failed to get container status \"27aac1da7a17c970d86f7d6a27405bfc0ce931390e81b19e7accb1c51ffa4899\": rpc error: code = NotFound desc = could not find container \"27aac1da7a17c970d86f7d6a27405bfc0ce931390e81b19e7accb1c51ffa4899\": container with ID starting with 27aac1da7a17c970d86f7d6a27405bfc0ce931390e81b19e7accb1c51ffa4899 not found: ID does not exist" Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.139168 4741 scope.go:117] 
"RemoveContainer" containerID="9e481229feda3e7bbf1c705ae05646145afd139de110af3932adf8fd75f4648a" Feb 26 08:52:03 crc kubenswrapper[4741]: E0226 08:52:03.139689 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e481229feda3e7bbf1c705ae05646145afd139de110af3932adf8fd75f4648a\": container with ID starting with 9e481229feda3e7bbf1c705ae05646145afd139de110af3932adf8fd75f4648a not found: ID does not exist" containerID="9e481229feda3e7bbf1c705ae05646145afd139de110af3932adf8fd75f4648a" Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.139757 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e481229feda3e7bbf1c705ae05646145afd139de110af3932adf8fd75f4648a"} err="failed to get container status \"9e481229feda3e7bbf1c705ae05646145afd139de110af3932adf8fd75f4648a\": rpc error: code = NotFound desc = could not find container \"9e481229feda3e7bbf1c705ae05646145afd139de110af3932adf8fd75f4648a\": container with ID starting with 9e481229feda3e7bbf1c705ae05646145afd139de110af3932adf8fd75f4648a not found: ID does not exist" Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.139792 4741 scope.go:117] "RemoveContainer" containerID="8b0fb80113e54f765f5492ce11c3c29d6e45df6616ba7f51083dc6ea7e563794" Feb 26 08:52:03 crc kubenswrapper[4741]: E0226 08:52:03.140176 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b0fb80113e54f765f5492ce11c3c29d6e45df6616ba7f51083dc6ea7e563794\": container with ID starting with 8b0fb80113e54f765f5492ce11c3c29d6e45df6616ba7f51083dc6ea7e563794 not found: ID does not exist" containerID="8b0fb80113e54f765f5492ce11c3c29d6e45df6616ba7f51083dc6ea7e563794" Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.140298 4741 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8b0fb80113e54f765f5492ce11c3c29d6e45df6616ba7f51083dc6ea7e563794"} err="failed to get container status \"8b0fb80113e54f765f5492ce11c3c29d6e45df6616ba7f51083dc6ea7e563794\": rpc error: code = NotFound desc = could not find container \"8b0fb80113e54f765f5492ce11c3c29d6e45df6616ba7f51083dc6ea7e563794\": container with ID starting with 8b0fb80113e54f765f5492ce11c3c29d6e45df6616ba7f51083dc6ea7e563794 not found: ID does not exist" Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.146305 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjs8w"] Feb 26 08:52:03 crc kubenswrapper[4741]: I0226 08:52:03.807545 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d23ca124-67fb-4074-8c5d-23360fb0ef04" path="/var/lib/kubelet/pods/d23ca124-67fb-4074-8c5d-23360fb0ef04/volumes" Feb 26 08:52:04 crc kubenswrapper[4741]: I0226 08:52:04.064236 4741 generic.go:334] "Generic (PLEG): container finished" podID="acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8" containerID="752732535d2ef0e171700bf129f4addff73f1eaa7ea5acb8a22a8bc7de6ad8a1" exitCode=0 Feb 26 08:52:04 crc kubenswrapper[4741]: I0226 08:52:04.064290 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534932-cvlg9" event={"ID":"acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8","Type":"ContainerDied","Data":"752732535d2ef0e171700bf129f4addff73f1eaa7ea5acb8a22a8bc7de6ad8a1"} Feb 26 08:52:05 crc kubenswrapper[4741]: I0226 08:52:05.496325 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534932-cvlg9" Feb 26 08:52:05 crc kubenswrapper[4741]: I0226 08:52:05.585462 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52fdx\" (UniqueName: \"kubernetes.io/projected/acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8-kube-api-access-52fdx\") pod \"acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8\" (UID: \"acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8\") " Feb 26 08:52:05 crc kubenswrapper[4741]: I0226 08:52:05.591869 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8-kube-api-access-52fdx" (OuterVolumeSpecName: "kube-api-access-52fdx") pod "acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8" (UID: "acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8"). InnerVolumeSpecName "kube-api-access-52fdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:52:05 crc kubenswrapper[4741]: I0226 08:52:05.689855 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52fdx\" (UniqueName: \"kubernetes.io/projected/acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8-kube-api-access-52fdx\") on node \"crc\" DevicePath \"\"" Feb 26 08:52:06 crc kubenswrapper[4741]: I0226 08:52:06.093537 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534932-cvlg9" event={"ID":"acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8","Type":"ContainerDied","Data":"057b342a69bfc5a8827b5da98a160e3d1532c858187d527bf88c17d4aa1e6109"} Feb 26 08:52:06 crc kubenswrapper[4741]: I0226 08:52:06.093603 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057b342a69bfc5a8827b5da98a160e3d1532c858187d527bf88c17d4aa1e6109" Feb 26 08:52:06 crc kubenswrapper[4741]: I0226 08:52:06.093634 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534932-cvlg9" Feb 26 08:52:06 crc kubenswrapper[4741]: I0226 08:52:06.584961 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534926-9zsrt"] Feb 26 08:52:06 crc kubenswrapper[4741]: I0226 08:52:06.597171 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534926-9zsrt"] Feb 26 08:52:07 crc kubenswrapper[4741]: I0226 08:52:07.868909 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cd25174-ea65-487c-841f-9055a74a398f" path="/var/lib/kubelet/pods/2cd25174-ea65-487c-841f-9055a74a398f/volumes" Feb 26 08:52:14 crc kubenswrapper[4741]: I0226 08:52:14.038364 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-cca3-account-create-update-tklhs"] Feb 26 08:52:14 crc kubenswrapper[4741]: I0226 08:52:14.052734 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-cca3-account-create-update-tklhs"] Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.050855 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2ljhv"] Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.070128 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0bee-account-create-update-d8ghs"] Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.086278 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-2sbbs"] Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.098471 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-z9pg7"] Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.109826 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2ljhv"] Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.121193 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-db-create-z9pg7"] Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.132202 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-2sbbs"] Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.152510 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0bee-account-create-update-d8ghs"] Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.168500 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3fe0-account-create-update-mwptz"] Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.179642 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3fe0-account-create-update-mwptz"] Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.802508 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a81300-f955-4bce-9a32-e60e7a391588" path="/var/lib/kubelet/pods/07a81300-f955-4bce-9a32-e60e7a391588/volumes" Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.803386 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a73675-9d8d-447a-ad06-b626a8016195" path="/var/lib/kubelet/pods/13a73675-9d8d-447a-ad06-b626a8016195/volumes" Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.804225 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6" path="/var/lib/kubelet/pods/1bd67a8a-b7f9-4d2f-974c-fe27e7efa3b6/volumes" Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.804850 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4218990d-060f-4a94-8f4c-980bb124cfc8" path="/var/lib/kubelet/pods/4218990d-060f-4a94-8f4c-980bb124cfc8/volumes" Feb 26 08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.807435 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b461e3b4-2bbe-4870-b388-b6235c3c0a22" path="/var/lib/kubelet/pods/b461e3b4-2bbe-4870-b388-b6235c3c0a22/volumes" Feb 26 
08:52:15 crc kubenswrapper[4741]: I0226 08:52:15.808033 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa90358f-3b47-4d6f-a363-9399e7472b60" path="/var/lib/kubelet/pods/fa90358f-3b47-4d6f-a363-9399e7472b60/volumes" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.149057 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.150062 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.150142 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.151347 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.151418 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" 
containerID="cri-o://1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" gracePeriod=600 Feb 26 08:52:25 crc kubenswrapper[4741]: E0226 08:52:25.282387 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.339993 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" exitCode=0 Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.340049 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38"} Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.340216 4741 scope.go:117] "RemoveContainer" containerID="3b89194ce392f941126adc3f41706d6d90ecd60e8c8587e461ea2236baa6d97c" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.341624 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:52:25 crc kubenswrapper[4741]: E0226 08:52:25.341988 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" 
podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.488315 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c2lkz"] Feb 26 08:52:25 crc kubenswrapper[4741]: E0226 08:52:25.489369 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8" containerName="oc" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.489395 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8" containerName="oc" Feb 26 08:52:25 crc kubenswrapper[4741]: E0226 08:52:25.489436 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23ca124-67fb-4074-8c5d-23360fb0ef04" containerName="extract-utilities" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.489444 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23ca124-67fb-4074-8c5d-23360fb0ef04" containerName="extract-utilities" Feb 26 08:52:25 crc kubenswrapper[4741]: E0226 08:52:25.489466 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23ca124-67fb-4074-8c5d-23360fb0ef04" containerName="extract-content" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.489474 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23ca124-67fb-4074-8c5d-23360fb0ef04" containerName="extract-content" Feb 26 08:52:25 crc kubenswrapper[4741]: E0226 08:52:25.489496 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23ca124-67fb-4074-8c5d-23360fb0ef04" containerName="registry-server" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.489504 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23ca124-67fb-4074-8c5d-23360fb0ef04" containerName="registry-server" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.489791 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d23ca124-67fb-4074-8c5d-23360fb0ef04" containerName="registry-server" Feb 26 
08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.489820 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8" containerName="oc" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.491979 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.503556 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c2lkz"] Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.529295 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b346ed6-5901-4a37-b887-afb226e1ab86-utilities\") pod \"certified-operators-c2lkz\" (UID: \"3b346ed6-5901-4a37-b887-afb226e1ab86\") " pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.529441 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pf2p\" (UniqueName: \"kubernetes.io/projected/3b346ed6-5901-4a37-b887-afb226e1ab86-kube-api-access-4pf2p\") pod \"certified-operators-c2lkz\" (UID: \"3b346ed6-5901-4a37-b887-afb226e1ab86\") " pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.530003 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b346ed6-5901-4a37-b887-afb226e1ab86-catalog-content\") pod \"certified-operators-c2lkz\" (UID: \"3b346ed6-5901-4a37-b887-afb226e1ab86\") " pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.633331 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3b346ed6-5901-4a37-b887-afb226e1ab86-utilities\") pod \"certified-operators-c2lkz\" (UID: \"3b346ed6-5901-4a37-b887-afb226e1ab86\") " pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.633913 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pf2p\" (UniqueName: \"kubernetes.io/projected/3b346ed6-5901-4a37-b887-afb226e1ab86-kube-api-access-4pf2p\") pod \"certified-operators-c2lkz\" (UID: \"3b346ed6-5901-4a37-b887-afb226e1ab86\") " pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.634028 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b346ed6-5901-4a37-b887-afb226e1ab86-utilities\") pod \"certified-operators-c2lkz\" (UID: \"3b346ed6-5901-4a37-b887-afb226e1ab86\") " pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.634377 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b346ed6-5901-4a37-b887-afb226e1ab86-catalog-content\") pod \"certified-operators-c2lkz\" (UID: \"3b346ed6-5901-4a37-b887-afb226e1ab86\") " pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.634698 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b346ed6-5901-4a37-b887-afb226e1ab86-catalog-content\") pod \"certified-operators-c2lkz\" (UID: \"3b346ed6-5901-4a37-b887-afb226e1ab86\") " pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.659057 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pf2p\" (UniqueName: 
\"kubernetes.io/projected/3b346ed6-5901-4a37-b887-afb226e1ab86-kube-api-access-4pf2p\") pod \"certified-operators-c2lkz\" (UID: \"3b346ed6-5901-4a37-b887-afb226e1ab86\") " pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:25 crc kubenswrapper[4741]: I0226 08:52:25.834335 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:26 crc kubenswrapper[4741]: I0226 08:52:26.445664 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c2lkz"] Feb 26 08:52:27 crc kubenswrapper[4741]: I0226 08:52:27.373881 4741 generic.go:334] "Generic (PLEG): container finished" podID="3b346ed6-5901-4a37-b887-afb226e1ab86" containerID="97027c1f27492e8add1d56f58d9fcb7a4c7b0e2c41606b70a54cfb01e0e5c816" exitCode=0 Feb 26 08:52:27 crc kubenswrapper[4741]: I0226 08:52:27.374045 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2lkz" event={"ID":"3b346ed6-5901-4a37-b887-afb226e1ab86","Type":"ContainerDied","Data":"97027c1f27492e8add1d56f58d9fcb7a4c7b0e2c41606b70a54cfb01e0e5c816"} Feb 26 08:52:27 crc kubenswrapper[4741]: I0226 08:52:27.374305 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2lkz" event={"ID":"3b346ed6-5901-4a37-b887-afb226e1ab86","Type":"ContainerStarted","Data":"4de477b026b2025ef92c593cff5c56ef292959968f961970120678225fa5bb7a"} Feb 26 08:52:29 crc kubenswrapper[4741]: I0226 08:52:29.416261 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2lkz" event={"ID":"3b346ed6-5901-4a37-b887-afb226e1ab86","Type":"ContainerStarted","Data":"c4d219ad6320acb8eb142f426b7449a4af24c2936c5750bafafe72ef3ba346aa"} Feb 26 08:52:32 crc kubenswrapper[4741]: I0226 08:52:32.500927 4741 generic.go:334] "Generic (PLEG): container finished" podID="3b346ed6-5901-4a37-b887-afb226e1ab86" 
containerID="c4d219ad6320acb8eb142f426b7449a4af24c2936c5750bafafe72ef3ba346aa" exitCode=0 Feb 26 08:52:32 crc kubenswrapper[4741]: I0226 08:52:32.500989 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2lkz" event={"ID":"3b346ed6-5901-4a37-b887-afb226e1ab86","Type":"ContainerDied","Data":"c4d219ad6320acb8eb142f426b7449a4af24c2936c5750bafafe72ef3ba346aa"} Feb 26 08:52:33 crc kubenswrapper[4741]: I0226 08:52:33.514925 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2lkz" event={"ID":"3b346ed6-5901-4a37-b887-afb226e1ab86","Type":"ContainerStarted","Data":"edcd406ac789e77e7eaec458749fc436c2fee62e3f1d79ed215f902f24e8f69f"} Feb 26 08:52:33 crc kubenswrapper[4741]: I0226 08:52:33.547856 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c2lkz" podStartSLOduration=3.010832962 podStartE2EDuration="8.547725511s" podCreationTimestamp="2026-02-26 08:52:25 +0000 UTC" firstStartedPulling="2026-02-26 08:52:27.37618606 +0000 UTC m=+2382.372123447" lastFinishedPulling="2026-02-26 08:52:32.913078609 +0000 UTC m=+2387.909015996" observedRunningTime="2026-02-26 08:52:33.538312403 +0000 UTC m=+2388.534249800" watchObservedRunningTime="2026-02-26 08:52:33.547725511 +0000 UTC m=+2388.543662908" Feb 26 08:52:35 crc kubenswrapper[4741]: I0226 08:52:35.835448 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:35 crc kubenswrapper[4741]: I0226 08:52:35.838709 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:35 crc kubenswrapper[4741]: I0226 08:52:35.904716 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:38 crc kubenswrapper[4741]: I0226 
08:52:38.788025 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:52:38 crc kubenswrapper[4741]: E0226 08:52:38.789174 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:52:45 crc kubenswrapper[4741]: I0226 08:52:45.889721 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:45 crc kubenswrapper[4741]: I0226 08:52:45.971962 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c2lkz"] Feb 26 08:52:46 crc kubenswrapper[4741]: I0226 08:52:46.687305 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c2lkz" podUID="3b346ed6-5901-4a37-b887-afb226e1ab86" containerName="registry-server" containerID="cri-o://edcd406ac789e77e7eaec458749fc436c2fee62e3f1d79ed215f902f24e8f69f" gracePeriod=2 Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.333171 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.529994 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b346ed6-5901-4a37-b887-afb226e1ab86-utilities\") pod \"3b346ed6-5901-4a37-b887-afb226e1ab86\" (UID: \"3b346ed6-5901-4a37-b887-afb226e1ab86\") " Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.530229 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pf2p\" (UniqueName: \"kubernetes.io/projected/3b346ed6-5901-4a37-b887-afb226e1ab86-kube-api-access-4pf2p\") pod \"3b346ed6-5901-4a37-b887-afb226e1ab86\" (UID: \"3b346ed6-5901-4a37-b887-afb226e1ab86\") " Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.530395 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b346ed6-5901-4a37-b887-afb226e1ab86-catalog-content\") pod \"3b346ed6-5901-4a37-b887-afb226e1ab86\" (UID: \"3b346ed6-5901-4a37-b887-afb226e1ab86\") " Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.531747 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b346ed6-5901-4a37-b887-afb226e1ab86-utilities" (OuterVolumeSpecName: "utilities") pod "3b346ed6-5901-4a37-b887-afb226e1ab86" (UID: "3b346ed6-5901-4a37-b887-afb226e1ab86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.538213 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b346ed6-5901-4a37-b887-afb226e1ab86-kube-api-access-4pf2p" (OuterVolumeSpecName: "kube-api-access-4pf2p") pod "3b346ed6-5901-4a37-b887-afb226e1ab86" (UID: "3b346ed6-5901-4a37-b887-afb226e1ab86"). InnerVolumeSpecName "kube-api-access-4pf2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.589883 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b346ed6-5901-4a37-b887-afb226e1ab86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b346ed6-5901-4a37-b887-afb226e1ab86" (UID: "3b346ed6-5901-4a37-b887-afb226e1ab86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.634397 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b346ed6-5901-4a37-b887-afb226e1ab86-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.634444 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b346ed6-5901-4a37-b887-afb226e1ab86-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.634459 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pf2p\" (UniqueName: \"kubernetes.io/projected/3b346ed6-5901-4a37-b887-afb226e1ab86-kube-api-access-4pf2p\") on node \"crc\" DevicePath \"\"" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.699828 4741 generic.go:334] "Generic (PLEG): container finished" podID="3b346ed6-5901-4a37-b887-afb226e1ab86" containerID="edcd406ac789e77e7eaec458749fc436c2fee62e3f1d79ed215f902f24e8f69f" exitCode=0 Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.699897 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2lkz" event={"ID":"3b346ed6-5901-4a37-b887-afb226e1ab86","Type":"ContainerDied","Data":"edcd406ac789e77e7eaec458749fc436c2fee62e3f1d79ed215f902f24e8f69f"} Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.699936 4741 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-c2lkz" event={"ID":"3b346ed6-5901-4a37-b887-afb226e1ab86","Type":"ContainerDied","Data":"4de477b026b2025ef92c593cff5c56ef292959968f961970120678225fa5bb7a"} Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.699955 4741 scope.go:117] "RemoveContainer" containerID="edcd406ac789e77e7eaec458749fc436c2fee62e3f1d79ed215f902f24e8f69f" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.700139 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c2lkz" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.743149 4741 scope.go:117] "RemoveContainer" containerID="c4d219ad6320acb8eb142f426b7449a4af24c2936c5750bafafe72ef3ba346aa" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.773921 4741 scope.go:117] "RemoveContainer" containerID="97027c1f27492e8add1d56f58d9fcb7a4c7b0e2c41606b70a54cfb01e0e5c816" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.812843 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c2lkz"] Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.815527 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c2lkz"] Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.838765 4741 scope.go:117] "RemoveContainer" containerID="edcd406ac789e77e7eaec458749fc436c2fee62e3f1d79ed215f902f24e8f69f" Feb 26 08:52:47 crc kubenswrapper[4741]: E0226 08:52:47.839511 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edcd406ac789e77e7eaec458749fc436c2fee62e3f1d79ed215f902f24e8f69f\": container with ID starting with edcd406ac789e77e7eaec458749fc436c2fee62e3f1d79ed215f902f24e8f69f not found: ID does not exist" containerID="edcd406ac789e77e7eaec458749fc436c2fee62e3f1d79ed215f902f24e8f69f" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 
08:52:47.839552 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edcd406ac789e77e7eaec458749fc436c2fee62e3f1d79ed215f902f24e8f69f"} err="failed to get container status \"edcd406ac789e77e7eaec458749fc436c2fee62e3f1d79ed215f902f24e8f69f\": rpc error: code = NotFound desc = could not find container \"edcd406ac789e77e7eaec458749fc436c2fee62e3f1d79ed215f902f24e8f69f\": container with ID starting with edcd406ac789e77e7eaec458749fc436c2fee62e3f1d79ed215f902f24e8f69f not found: ID does not exist" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.839578 4741 scope.go:117] "RemoveContainer" containerID="c4d219ad6320acb8eb142f426b7449a4af24c2936c5750bafafe72ef3ba346aa" Feb 26 08:52:47 crc kubenswrapper[4741]: E0226 08:52:47.839980 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4d219ad6320acb8eb142f426b7449a4af24c2936c5750bafafe72ef3ba346aa\": container with ID starting with c4d219ad6320acb8eb142f426b7449a4af24c2936c5750bafafe72ef3ba346aa not found: ID does not exist" containerID="c4d219ad6320acb8eb142f426b7449a4af24c2936c5750bafafe72ef3ba346aa" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.840029 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d219ad6320acb8eb142f426b7449a4af24c2936c5750bafafe72ef3ba346aa"} err="failed to get container status \"c4d219ad6320acb8eb142f426b7449a4af24c2936c5750bafafe72ef3ba346aa\": rpc error: code = NotFound desc = could not find container \"c4d219ad6320acb8eb142f426b7449a4af24c2936c5750bafafe72ef3ba346aa\": container with ID starting with c4d219ad6320acb8eb142f426b7449a4af24c2936c5750bafafe72ef3ba346aa not found: ID does not exist" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.840063 4741 scope.go:117] "RemoveContainer" containerID="97027c1f27492e8add1d56f58d9fcb7a4c7b0e2c41606b70a54cfb01e0e5c816" Feb 26 08:52:47 crc 
kubenswrapper[4741]: E0226 08:52:47.840434 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97027c1f27492e8add1d56f58d9fcb7a4c7b0e2c41606b70a54cfb01e0e5c816\": container with ID starting with 97027c1f27492e8add1d56f58d9fcb7a4c7b0e2c41606b70a54cfb01e0e5c816 not found: ID does not exist" containerID="97027c1f27492e8add1d56f58d9fcb7a4c7b0e2c41606b70a54cfb01e0e5c816" Feb 26 08:52:47 crc kubenswrapper[4741]: I0226 08:52:47.840484 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97027c1f27492e8add1d56f58d9fcb7a4c7b0e2c41606b70a54cfb01e0e5c816"} err="failed to get container status \"97027c1f27492e8add1d56f58d9fcb7a4c7b0e2c41606b70a54cfb01e0e5c816\": rpc error: code = NotFound desc = could not find container \"97027c1f27492e8add1d56f58d9fcb7a4c7b0e2c41606b70a54cfb01e0e5c816\": container with ID starting with 97027c1f27492e8add1d56f58d9fcb7a4c7b0e2c41606b70a54cfb01e0e5c816 not found: ID does not exist" Feb 26 08:52:48 crc kubenswrapper[4741]: I0226 08:52:48.058253 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ctw5f"] Feb 26 08:52:48 crc kubenswrapper[4741]: I0226 08:52:48.074742 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ctw5f"] Feb 26 08:52:49 crc kubenswrapper[4741]: I0226 08:52:49.803964 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b346ed6-5901-4a37-b887-afb226e1ab86" path="/var/lib/kubelet/pods/3b346ed6-5901-4a37-b887-afb226e1ab86/volumes" Feb 26 08:52:49 crc kubenswrapper[4741]: I0226 08:52:49.805706 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e11a47b6-7849-4cb0-9b25-c0a26225fba2" path="/var/lib/kubelet/pods/e11a47b6-7849-4cb0-9b25-c0a26225fba2/volumes" Feb 26 08:52:50 crc kubenswrapper[4741]: I0226 08:52:50.787430 4741 scope.go:117] "RemoveContainer" 
containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:52:50 crc kubenswrapper[4741]: E0226 08:52:50.788285 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:52:56 crc kubenswrapper[4741]: I0226 08:52:56.049291 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-q775j"] Feb 26 08:52:56 crc kubenswrapper[4741]: I0226 08:52:56.064868 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-3ddd-account-create-update-28w7b"] Feb 26 08:52:56 crc kubenswrapper[4741]: I0226 08:52:56.078554 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-3ddd-account-create-update-28w7b"] Feb 26 08:52:56 crc kubenswrapper[4741]: I0226 08:52:56.089541 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-q775j"] Feb 26 08:52:57 crc kubenswrapper[4741]: I0226 08:52:57.802886 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87c3d5ff-f72f-43b9-8df5-be734ffa83c0" path="/var/lib/kubelet/pods/87c3d5ff-f72f-43b9-8df5-be734ffa83c0/volumes" Feb 26 08:52:57 crc kubenswrapper[4741]: I0226 08:52:57.804586 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a78221f1-6070-4847-9f6e-88867af64c21" path="/var/lib/kubelet/pods/a78221f1-6070-4847-9f6e-88867af64c21/volumes" Feb 26 08:53:00 crc kubenswrapper[4741]: I0226 08:53:00.079032 4741 scope.go:117] "RemoveContainer" containerID="7ceb6bd914989c1e1c6422103aa9d7f4e41b5d1dc3dfc6ecb4abf3a10930a92a" Feb 26 08:53:00 crc kubenswrapper[4741]: I0226 08:53:00.110453 4741 scope.go:117] 
"RemoveContainer" containerID="d2508e3043d422c51dad3287a50dcb5f2e3fd5949676282f80de18f77a47bac9" Feb 26 08:53:00 crc kubenswrapper[4741]: I0226 08:53:00.185498 4741 scope.go:117] "RemoveContainer" containerID="b1e1ba51f483d3e3eb5e0897946950859f8a93537a6f34d9e4bea5c122f84dba" Feb 26 08:53:00 crc kubenswrapper[4741]: I0226 08:53:00.275733 4741 scope.go:117] "RemoveContainer" containerID="b0b14c2a5b1fb83519812d583e681072de2d28ac03a566754d0dc85a451c4896" Feb 26 08:53:00 crc kubenswrapper[4741]: I0226 08:53:00.317567 4741 scope.go:117] "RemoveContainer" containerID="14dd285811cd230dcd3c517e6f9ecb1c0fc1ea4cc2021fe67587d31e0a03081b" Feb 26 08:53:00 crc kubenswrapper[4741]: I0226 08:53:00.396156 4741 scope.go:117] "RemoveContainer" containerID="79b76d2bd414c769c9c2b17526395c32dc47e129a295b6971757049b77e4efa0" Feb 26 08:53:00 crc kubenswrapper[4741]: I0226 08:53:00.466120 4741 scope.go:117] "RemoveContainer" containerID="a78128872fc1697606acff0159f49ac8ac50a0a78b9eac8850456253f9553607" Feb 26 08:53:00 crc kubenswrapper[4741]: I0226 08:53:00.505157 4741 scope.go:117] "RemoveContainer" containerID="f6e30e4bca77738a24da45ae16cc07c59725f9e44aad36863fd5211b0bff2b29" Feb 26 08:53:00 crc kubenswrapper[4741]: I0226 08:53:00.536605 4741 scope.go:117] "RemoveContainer" containerID="4e71fa392aa862d748993198d27b098a9bd27b7900b08992431c6646099005c9" Feb 26 08:53:00 crc kubenswrapper[4741]: I0226 08:53:00.577553 4741 scope.go:117] "RemoveContainer" containerID="032f8a00d3f9d8215aafc1dc7034d5b33f6cf2b27a388ae3b54ee4216efcda61" Feb 26 08:53:01 crc kubenswrapper[4741]: I0226 08:53:01.789311 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:53:01 crc kubenswrapper[4741]: E0226 08:53:01.790387 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:53:11 crc kubenswrapper[4741]: I0226 08:53:11.023589 4741 generic.go:334] "Generic (PLEG): container finished" podID="a34fc776-add8-4082-8fa8-041ac3ee8860" containerID="86a44e719d4db48cb2347344a685c45283768db5683efcd2a92356b5b91e3eec" exitCode=0 Feb 26 08:53:11 crc kubenswrapper[4741]: I0226 08:53:11.024196 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" event={"ID":"a34fc776-add8-4082-8fa8-041ac3ee8860","Type":"ContainerDied","Data":"86a44e719d4db48cb2347344a685c45283768db5683efcd2a92356b5b91e3eec"} Feb 26 08:53:12 crc kubenswrapper[4741]: I0226 08:53:12.560691 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" Feb 26 08:53:12 crc kubenswrapper[4741]: I0226 08:53:12.645736 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a34fc776-add8-4082-8fa8-041ac3ee8860-ssh-key-openstack-edpm-ipam\") pod \"a34fc776-add8-4082-8fa8-041ac3ee8860\" (UID: \"a34fc776-add8-4082-8fa8-041ac3ee8860\") " Feb 26 08:53:12 crc kubenswrapper[4741]: I0226 08:53:12.645789 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szfhz\" (UniqueName: \"kubernetes.io/projected/a34fc776-add8-4082-8fa8-041ac3ee8860-kube-api-access-szfhz\") pod \"a34fc776-add8-4082-8fa8-041ac3ee8860\" (UID: \"a34fc776-add8-4082-8fa8-041ac3ee8860\") " Feb 26 08:53:12 crc kubenswrapper[4741]: I0226 08:53:12.646721 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a34fc776-add8-4082-8fa8-041ac3ee8860-inventory\") pod \"a34fc776-add8-4082-8fa8-041ac3ee8860\" (UID: \"a34fc776-add8-4082-8fa8-041ac3ee8860\") " Feb 26 08:53:12 crc kubenswrapper[4741]: I0226 08:53:12.685368 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a34fc776-add8-4082-8fa8-041ac3ee8860-kube-api-access-szfhz" (OuterVolumeSpecName: "kube-api-access-szfhz") pod "a34fc776-add8-4082-8fa8-041ac3ee8860" (UID: "a34fc776-add8-4082-8fa8-041ac3ee8860"). InnerVolumeSpecName "kube-api-access-szfhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:53:12 crc kubenswrapper[4741]: I0226 08:53:12.751800 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szfhz\" (UniqueName: \"kubernetes.io/projected/a34fc776-add8-4082-8fa8-041ac3ee8860-kube-api-access-szfhz\") on node \"crc\" DevicePath \"\"" Feb 26 08:53:12 crc kubenswrapper[4741]: I0226 08:53:12.774773 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34fc776-add8-4082-8fa8-041ac3ee8860-inventory" (OuterVolumeSpecName: "inventory") pod "a34fc776-add8-4082-8fa8-041ac3ee8860" (UID: "a34fc776-add8-4082-8fa8-041ac3ee8860"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:53:12 crc kubenswrapper[4741]: I0226 08:53:12.782341 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34fc776-add8-4082-8fa8-041ac3ee8860-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a34fc776-add8-4082-8fa8-041ac3ee8860" (UID: "a34fc776-add8-4082-8fa8-041ac3ee8860"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:53:12 crc kubenswrapper[4741]: I0226 08:53:12.855122 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a34fc776-add8-4082-8fa8-041ac3ee8860-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:53:12 crc kubenswrapper[4741]: I0226 08:53:12.855166 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a34fc776-add8-4082-8fa8-041ac3ee8860-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.050612 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" event={"ID":"a34fc776-add8-4082-8fa8-041ac3ee8860","Type":"ContainerDied","Data":"86cd61bd4304c08b558a3c0c997e5c0845f29e545c009effb6339b992b718bc3"} Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.050664 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86cd61bd4304c08b558a3c0c997e5c0845f29e545c009effb6339b992b718bc3" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.050737 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.161926 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7"] Feb 26 08:53:13 crc kubenswrapper[4741]: E0226 08:53:13.162554 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b346ed6-5901-4a37-b887-afb226e1ab86" containerName="extract-utilities" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.162575 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b346ed6-5901-4a37-b887-afb226e1ab86" containerName="extract-utilities" Feb 26 08:53:13 crc kubenswrapper[4741]: E0226 08:53:13.162621 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b346ed6-5901-4a37-b887-afb226e1ab86" containerName="extract-content" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.162628 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b346ed6-5901-4a37-b887-afb226e1ab86" containerName="extract-content" Feb 26 08:53:13 crc kubenswrapper[4741]: E0226 08:53:13.162654 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b346ed6-5901-4a37-b887-afb226e1ab86" containerName="registry-server" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.162663 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b346ed6-5901-4a37-b887-afb226e1ab86" containerName="registry-server" Feb 26 08:53:13 crc kubenswrapper[4741]: E0226 08:53:13.162680 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34fc776-add8-4082-8fa8-041ac3ee8860" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.162689 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34fc776-add8-4082-8fa8-041ac3ee8860" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 08:53:13 crc 
kubenswrapper[4741]: I0226 08:53:13.162928 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b346ed6-5901-4a37-b887-afb226e1ab86" containerName="registry-server" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.162948 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="a34fc776-add8-4082-8fa8-041ac3ee8860" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.163961 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.167001 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.167246 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.167974 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.168251 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.186018 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7"] Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.271181 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d5750c0-9314-4e3c-9711-c4d11fba6b84-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7\" (UID: \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.271262 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d5750c0-9314-4e3c-9711-c4d11fba6b84-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7\" (UID: \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.271343 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwxhx\" (UniqueName: \"kubernetes.io/projected/1d5750c0-9314-4e3c-9711-c4d11fba6b84-kube-api-access-jwxhx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7\" (UID: \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.374268 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d5750c0-9314-4e3c-9711-c4d11fba6b84-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7\" (UID: \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.374344 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d5750c0-9314-4e3c-9711-c4d11fba6b84-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7\" (UID: \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.374389 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwxhx\" (UniqueName: \"kubernetes.io/projected/1d5750c0-9314-4e3c-9711-c4d11fba6b84-kube-api-access-jwxhx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7\" (UID: \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.378443 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d5750c0-9314-4e3c-9711-c4d11fba6b84-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7\" (UID: \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.378518 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d5750c0-9314-4e3c-9711-c4d11fba6b84-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7\" (UID: \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.392987 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwxhx\" (UniqueName: \"kubernetes.io/projected/1d5750c0-9314-4e3c-9711-c4d11fba6b84-kube-api-access-jwxhx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7\" (UID: \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" Feb 26 08:53:13 crc kubenswrapper[4741]: I0226 08:53:13.487776 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" Feb 26 08:53:14 crc kubenswrapper[4741]: I0226 08:53:14.084389 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 08:53:14 crc kubenswrapper[4741]: I0226 08:53:14.093815 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7"] Feb 26 08:53:15 crc kubenswrapper[4741]: I0226 08:53:15.078899 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" event={"ID":"1d5750c0-9314-4e3c-9711-c4d11fba6b84","Type":"ContainerStarted","Data":"2b4c4d031d596105d20b108b0032bbcf1b16c17aa1518e5c783df2ec540e3d47"} Feb 26 08:53:15 crc kubenswrapper[4741]: I0226 08:53:15.079525 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" event={"ID":"1d5750c0-9314-4e3c-9711-c4d11fba6b84","Type":"ContainerStarted","Data":"9d5ac50bed0dbae470c946b0cb8aca2d47027d0c203e731abb6b0c7f34267225"} Feb 26 08:53:15 crc kubenswrapper[4741]: I0226 08:53:15.108899 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" podStartSLOduration=1.49718542 podStartE2EDuration="2.10887619s" podCreationTimestamp="2026-02-26 08:53:13 +0000 UTC" firstStartedPulling="2026-02-26 08:53:14.084181005 +0000 UTC m=+2429.080118392" lastFinishedPulling="2026-02-26 08:53:14.695871775 +0000 UTC m=+2429.691809162" observedRunningTime="2026-02-26 08:53:15.102842839 +0000 UTC m=+2430.098780226" watchObservedRunningTime="2026-02-26 08:53:15.10887619 +0000 UTC m=+2430.104813567" Feb 26 08:53:16 crc kubenswrapper[4741]: I0226 08:53:16.787926 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:53:16 crc 
kubenswrapper[4741]: E0226 08:53:16.788415 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:53:20 crc kubenswrapper[4741]: I0226 08:53:20.167765 4741 generic.go:334] "Generic (PLEG): container finished" podID="1d5750c0-9314-4e3c-9711-c4d11fba6b84" containerID="2b4c4d031d596105d20b108b0032bbcf1b16c17aa1518e5c783df2ec540e3d47" exitCode=0 Feb 26 08:53:20 crc kubenswrapper[4741]: I0226 08:53:20.167864 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" event={"ID":"1d5750c0-9314-4e3c-9711-c4d11fba6b84","Type":"ContainerDied","Data":"2b4c4d031d596105d20b108b0032bbcf1b16c17aa1518e5c783df2ec540e3d47"} Feb 26 08:53:21 crc kubenswrapper[4741]: I0226 08:53:21.678355 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" Feb 26 08:53:21 crc kubenswrapper[4741]: I0226 08:53:21.857678 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d5750c0-9314-4e3c-9711-c4d11fba6b84-ssh-key-openstack-edpm-ipam\") pod \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\" (UID: \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\") " Feb 26 08:53:21 crc kubenswrapper[4741]: I0226 08:53:21.858232 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d5750c0-9314-4e3c-9711-c4d11fba6b84-inventory\") pod \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\" (UID: \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\") " Feb 26 08:53:21 crc kubenswrapper[4741]: I0226 08:53:21.858305 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwxhx\" (UniqueName: \"kubernetes.io/projected/1d5750c0-9314-4e3c-9711-c4d11fba6b84-kube-api-access-jwxhx\") pod \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\" (UID: \"1d5750c0-9314-4e3c-9711-c4d11fba6b84\") " Feb 26 08:53:21 crc kubenswrapper[4741]: I0226 08:53:21.864976 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d5750c0-9314-4e3c-9711-c4d11fba6b84-kube-api-access-jwxhx" (OuterVolumeSpecName: "kube-api-access-jwxhx") pod "1d5750c0-9314-4e3c-9711-c4d11fba6b84" (UID: "1d5750c0-9314-4e3c-9711-c4d11fba6b84"). InnerVolumeSpecName "kube-api-access-jwxhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:53:21 crc kubenswrapper[4741]: I0226 08:53:21.899838 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5750c0-9314-4e3c-9711-c4d11fba6b84-inventory" (OuterVolumeSpecName: "inventory") pod "1d5750c0-9314-4e3c-9711-c4d11fba6b84" (UID: "1d5750c0-9314-4e3c-9711-c4d11fba6b84"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:53:21 crc kubenswrapper[4741]: I0226 08:53:21.911283 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5750c0-9314-4e3c-9711-c4d11fba6b84-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1d5750c0-9314-4e3c-9711-c4d11fba6b84" (UID: "1d5750c0-9314-4e3c-9711-c4d11fba6b84"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:53:21 crc kubenswrapper[4741]: I0226 08:53:21.962319 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d5750c0-9314-4e3c-9711-c4d11fba6b84-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 08:53:21 crc kubenswrapper[4741]: I0226 08:53:21.962638 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwxhx\" (UniqueName: \"kubernetes.io/projected/1d5750c0-9314-4e3c-9711-c4d11fba6b84-kube-api-access-jwxhx\") on node \"crc\" DevicePath \"\"" Feb 26 08:53:21 crc kubenswrapper[4741]: I0226 08:53:21.962767 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d5750c0-9314-4e3c-9711-c4d11fba6b84-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.193287 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" event={"ID":"1d5750c0-9314-4e3c-9711-c4d11fba6b84","Type":"ContainerDied","Data":"9d5ac50bed0dbae470c946b0cb8aca2d47027d0c203e731abb6b0c7f34267225"} Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.193344 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d5ac50bed0dbae470c946b0cb8aca2d47027d0c203e731abb6b0c7f34267225" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 
08:53:22.193382 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.354465 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9"] Feb 26 08:53:22 crc kubenswrapper[4741]: E0226 08:53:22.355428 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d5750c0-9314-4e3c-9711-c4d11fba6b84" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.355459 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d5750c0-9314-4e3c-9711-c4d11fba6b84" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.355745 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d5750c0-9314-4e3c-9711-c4d11fba6b84" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.356791 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.359946 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.360448 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.360703 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.361224 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.373275 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9"] Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.476327 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtg8\" (UniqueName: \"kubernetes.io/projected/78195bed-8ed3-456b-aad9-27eca93ebb64-kube-api-access-6rtg8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z69z9\" (UID: \"78195bed-8ed3-456b-aad9-27eca93ebb64\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.476437 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78195bed-8ed3-456b-aad9-27eca93ebb64-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z69z9\" (UID: \"78195bed-8ed3-456b-aad9-27eca93ebb64\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.476531 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78195bed-8ed3-456b-aad9-27eca93ebb64-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z69z9\" (UID: \"78195bed-8ed3-456b-aad9-27eca93ebb64\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.580241 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rtg8\" (UniqueName: \"kubernetes.io/projected/78195bed-8ed3-456b-aad9-27eca93ebb64-kube-api-access-6rtg8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z69z9\" (UID: \"78195bed-8ed3-456b-aad9-27eca93ebb64\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.580355 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78195bed-8ed3-456b-aad9-27eca93ebb64-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z69z9\" (UID: \"78195bed-8ed3-456b-aad9-27eca93ebb64\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.580439 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78195bed-8ed3-456b-aad9-27eca93ebb64-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z69z9\" (UID: \"78195bed-8ed3-456b-aad9-27eca93ebb64\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.585704 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78195bed-8ed3-456b-aad9-27eca93ebb64-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-z69z9\" (UID: \"78195bed-8ed3-456b-aad9-27eca93ebb64\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.587523 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78195bed-8ed3-456b-aad9-27eca93ebb64-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z69z9\" (UID: \"78195bed-8ed3-456b-aad9-27eca93ebb64\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.599723 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rtg8\" (UniqueName: \"kubernetes.io/projected/78195bed-8ed3-456b-aad9-27eca93ebb64-kube-api-access-6rtg8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z69z9\" (UID: \"78195bed-8ed3-456b-aad9-27eca93ebb64\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" Feb 26 08:53:22 crc kubenswrapper[4741]: I0226 08:53:22.679142 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" Feb 26 08:53:23 crc kubenswrapper[4741]: I0226 08:53:23.073184 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-sd6r4"] Feb 26 08:53:23 crc kubenswrapper[4741]: I0226 08:53:23.096170 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-sd6r4"] Feb 26 08:53:23 crc kubenswrapper[4741]: I0226 08:53:23.121041 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vstzs"] Feb 26 08:53:23 crc kubenswrapper[4741]: I0226 08:53:23.137483 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vstzs"] Feb 26 08:53:23 crc kubenswrapper[4741]: I0226 08:53:23.393841 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9"] Feb 26 08:53:23 crc kubenswrapper[4741]: I0226 08:53:23.803622 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8a4a741-4c8b-49fc-8f2d-499d5aa61f78" path="/var/lib/kubelet/pods/a8a4a741-4c8b-49fc-8f2d-499d5aa61f78/volumes" Feb 26 08:53:23 crc kubenswrapper[4741]: I0226 08:53:23.805941 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f09ad43d-1fea-4335-97aa-5428b9be77dd" path="/var/lib/kubelet/pods/f09ad43d-1fea-4335-97aa-5428b9be77dd/volumes" Feb 26 08:53:24 crc kubenswrapper[4741]: I0226 08:53:24.226220 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" event={"ID":"78195bed-8ed3-456b-aad9-27eca93ebb64","Type":"ContainerStarted","Data":"b54a4a11c1989272940ffd796ef9da0c03315689fb0dfc1f2475753f4bd7855f"} Feb 26 08:53:24 crc kubenswrapper[4741]: I0226 08:53:24.226582 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" 
event={"ID":"78195bed-8ed3-456b-aad9-27eca93ebb64","Type":"ContainerStarted","Data":"9c2c80ff913bab43af3f2b8c3f25da1d150a20dfe379679a49e3f03924ac5baa"} Feb 26 08:53:24 crc kubenswrapper[4741]: I0226 08:53:24.258151 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" podStartSLOduration=1.763780302 podStartE2EDuration="2.258110292s" podCreationTimestamp="2026-02-26 08:53:22 +0000 UTC" firstStartedPulling="2026-02-26 08:53:23.399731461 +0000 UTC m=+2438.395668848" lastFinishedPulling="2026-02-26 08:53:23.894061451 +0000 UTC m=+2438.889998838" observedRunningTime="2026-02-26 08:53:24.248090127 +0000 UTC m=+2439.244027534" watchObservedRunningTime="2026-02-26 08:53:24.258110292 +0000 UTC m=+2439.254047679" Feb 26 08:53:28 crc kubenswrapper[4741]: I0226 08:53:28.788461 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:53:28 crc kubenswrapper[4741]: E0226 08:53:28.789286 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:53:39 crc kubenswrapper[4741]: I0226 08:53:39.787964 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:53:39 crc kubenswrapper[4741]: E0226 08:53:39.788955 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:53:50 crc kubenswrapper[4741]: I0226 08:53:50.787986 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:53:50 crc kubenswrapper[4741]: E0226 08:53:50.788835 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:54:00 crc kubenswrapper[4741]: I0226 08:54:00.175877 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534934-nscjz"] Feb 26 08:54:00 crc kubenswrapper[4741]: I0226 08:54:00.180084 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534934-nscjz" Feb 26 08:54:00 crc kubenswrapper[4741]: I0226 08:54:00.184385 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:54:00 crc kubenswrapper[4741]: I0226 08:54:00.184684 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:54:00 crc kubenswrapper[4741]: I0226 08:54:00.184921 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:54:00 crc kubenswrapper[4741]: I0226 08:54:00.199367 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534934-nscjz"] Feb 26 08:54:00 crc kubenswrapper[4741]: I0226 08:54:00.285507 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx2bf\" (UniqueName: \"kubernetes.io/projected/507b75c8-121a-4d89-95a2-f2a480783291-kube-api-access-kx2bf\") pod \"auto-csr-approver-29534934-nscjz\" (UID: \"507b75c8-121a-4d89-95a2-f2a480783291\") " pod="openshift-infra/auto-csr-approver-29534934-nscjz" Feb 26 08:54:00 crc kubenswrapper[4741]: I0226 08:54:00.388874 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx2bf\" (UniqueName: \"kubernetes.io/projected/507b75c8-121a-4d89-95a2-f2a480783291-kube-api-access-kx2bf\") pod \"auto-csr-approver-29534934-nscjz\" (UID: \"507b75c8-121a-4d89-95a2-f2a480783291\") " pod="openshift-infra/auto-csr-approver-29534934-nscjz" Feb 26 08:54:00 crc kubenswrapper[4741]: I0226 08:54:00.409678 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx2bf\" (UniqueName: \"kubernetes.io/projected/507b75c8-121a-4d89-95a2-f2a480783291-kube-api-access-kx2bf\") pod \"auto-csr-approver-29534934-nscjz\" (UID: \"507b75c8-121a-4d89-95a2-f2a480783291\") " 
pod="openshift-infra/auto-csr-approver-29534934-nscjz" Feb 26 08:54:00 crc kubenswrapper[4741]: I0226 08:54:00.514290 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534934-nscjz" Feb 26 08:54:00 crc kubenswrapper[4741]: I0226 08:54:00.920087 4741 scope.go:117] "RemoveContainer" containerID="91d6c32f01f092463fa2ca40792b8c9692da94955b74b493c2cbf6f85e1985f8" Feb 26 08:54:00 crc kubenswrapper[4741]: I0226 08:54:00.953888 4741 scope.go:117] "RemoveContainer" containerID="6835d6f1676b747eae4836ceccf97dcf45c19870031b5c5d2b045beb3f9acd1f" Feb 26 08:54:01 crc kubenswrapper[4741]: I0226 08:54:01.059622 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534934-nscjz"] Feb 26 08:54:01 crc kubenswrapper[4741]: I0226 08:54:01.739060 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534934-nscjz" event={"ID":"507b75c8-121a-4d89-95a2-f2a480783291","Type":"ContainerStarted","Data":"075765e33e4574f5802059dc5ca81153f406e04dfef685d747e85211c036b19c"} Feb 26 08:54:01 crc kubenswrapper[4741]: I0226 08:54:01.741238 4741 generic.go:334] "Generic (PLEG): container finished" podID="78195bed-8ed3-456b-aad9-27eca93ebb64" containerID="b54a4a11c1989272940ffd796ef9da0c03315689fb0dfc1f2475753f4bd7855f" exitCode=0 Feb 26 08:54:01 crc kubenswrapper[4741]: I0226 08:54:01.741305 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" event={"ID":"78195bed-8ed3-456b-aad9-27eca93ebb64","Type":"ContainerDied","Data":"b54a4a11c1989272940ffd796ef9da0c03315689fb0dfc1f2475753f4bd7855f"} Feb 26 08:54:02 crc kubenswrapper[4741]: I0226 08:54:02.760620 4741 generic.go:334] "Generic (PLEG): container finished" podID="507b75c8-121a-4d89-95a2-f2a480783291" containerID="8dcadd91ed6e0ee7830d93cee38d9964c0589c125872c2630d562e914dc072e3" exitCode=0 Feb 26 08:54:02 crc kubenswrapper[4741]: 
I0226 08:54:02.760881 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534934-nscjz" event={"ID":"507b75c8-121a-4d89-95a2-f2a480783291","Type":"ContainerDied","Data":"8dcadd91ed6e0ee7830d93cee38d9964c0589c125872c2630d562e914dc072e3"} Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.372468 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.509199 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rtg8\" (UniqueName: \"kubernetes.io/projected/78195bed-8ed3-456b-aad9-27eca93ebb64-kube-api-access-6rtg8\") pod \"78195bed-8ed3-456b-aad9-27eca93ebb64\" (UID: \"78195bed-8ed3-456b-aad9-27eca93ebb64\") " Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.509880 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78195bed-8ed3-456b-aad9-27eca93ebb64-inventory\") pod \"78195bed-8ed3-456b-aad9-27eca93ebb64\" (UID: \"78195bed-8ed3-456b-aad9-27eca93ebb64\") " Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.510154 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78195bed-8ed3-456b-aad9-27eca93ebb64-ssh-key-openstack-edpm-ipam\") pod \"78195bed-8ed3-456b-aad9-27eca93ebb64\" (UID: \"78195bed-8ed3-456b-aad9-27eca93ebb64\") " Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.516540 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78195bed-8ed3-456b-aad9-27eca93ebb64-kube-api-access-6rtg8" (OuterVolumeSpecName: "kube-api-access-6rtg8") pod "78195bed-8ed3-456b-aad9-27eca93ebb64" (UID: "78195bed-8ed3-456b-aad9-27eca93ebb64"). InnerVolumeSpecName "kube-api-access-6rtg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.552635 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78195bed-8ed3-456b-aad9-27eca93ebb64-inventory" (OuterVolumeSpecName: "inventory") pod "78195bed-8ed3-456b-aad9-27eca93ebb64" (UID: "78195bed-8ed3-456b-aad9-27eca93ebb64"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.573336 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78195bed-8ed3-456b-aad9-27eca93ebb64-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "78195bed-8ed3-456b-aad9-27eca93ebb64" (UID: "78195bed-8ed3-456b-aad9-27eca93ebb64"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.614088 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78195bed-8ed3-456b-aad9-27eca93ebb64-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.614151 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78195bed-8ed3-456b-aad9-27eca93ebb64-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.614167 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rtg8\" (UniqueName: \"kubernetes.io/projected/78195bed-8ed3-456b-aad9-27eca93ebb64-kube-api-access-6rtg8\") on node \"crc\" DevicePath \"\"" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.778080 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.778216 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z69z9" event={"ID":"78195bed-8ed3-456b-aad9-27eca93ebb64","Type":"ContainerDied","Data":"9c2c80ff913bab43af3f2b8c3f25da1d150a20dfe379679a49e3f03924ac5baa"} Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.778262 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c2c80ff913bab43af3f2b8c3f25da1d150a20dfe379679a49e3f03924ac5baa" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.910191 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n"] Feb 26 08:54:03 crc kubenswrapper[4741]: E0226 08:54:03.911014 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78195bed-8ed3-456b-aad9-27eca93ebb64" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.911037 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="78195bed-8ed3-456b-aad9-27eca93ebb64" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.911306 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="78195bed-8ed3-456b-aad9-27eca93ebb64" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.912508 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.921640 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.921991 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.932203 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:54:03 crc kubenswrapper[4741]: I0226 08:54:03.932355 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.014385 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n"] Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.033433 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cffc65f6-91a1-45b8-b723-ac972e12e9f9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-st96n\" (UID: \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.033574 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cffc65f6-91a1-45b8-b723-ac972e12e9f9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-st96n\" (UID: \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.033672 
4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfd6m\" (UniqueName: \"kubernetes.io/projected/cffc65f6-91a1-45b8-b723-ac972e12e9f9-kube-api-access-dfd6m\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-st96n\" (UID: \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.136101 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cffc65f6-91a1-45b8-b723-ac972e12e9f9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-st96n\" (UID: \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.136298 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfd6m\" (UniqueName: \"kubernetes.io/projected/cffc65f6-91a1-45b8-b723-ac972e12e9f9-kube-api-access-dfd6m\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-st96n\" (UID: \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.136384 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cffc65f6-91a1-45b8-b723-ac972e12e9f9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-st96n\" (UID: \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.175092 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfd6m\" (UniqueName: 
\"kubernetes.io/projected/cffc65f6-91a1-45b8-b723-ac972e12e9f9-kube-api-access-dfd6m\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-st96n\" (UID: \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.198661 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cffc65f6-91a1-45b8-b723-ac972e12e9f9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-st96n\" (UID: \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.217151 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cffc65f6-91a1-45b8-b723-ac972e12e9f9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-st96n\" (UID: \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.266563 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.411534 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534934-nscjz" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.556176 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx2bf\" (UniqueName: \"kubernetes.io/projected/507b75c8-121a-4d89-95a2-f2a480783291-kube-api-access-kx2bf\") pod \"507b75c8-121a-4d89-95a2-f2a480783291\" (UID: \"507b75c8-121a-4d89-95a2-f2a480783291\") " Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.562470 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507b75c8-121a-4d89-95a2-f2a480783291-kube-api-access-kx2bf" (OuterVolumeSpecName: "kube-api-access-kx2bf") pod "507b75c8-121a-4d89-95a2-f2a480783291" (UID: "507b75c8-121a-4d89-95a2-f2a480783291"). InnerVolumeSpecName "kube-api-access-kx2bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.660785 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx2bf\" (UniqueName: \"kubernetes.io/projected/507b75c8-121a-4d89-95a2-f2a480783291-kube-api-access-kx2bf\") on node \"crc\" DevicePath \"\"" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.787641 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:54:04 crc kubenswrapper[4741]: E0226 08:54:04.788468 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.793139 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29534934-nscjz" event={"ID":"507b75c8-121a-4d89-95a2-f2a480783291","Type":"ContainerDied","Data":"075765e33e4574f5802059dc5ca81153f406e04dfef685d747e85211c036b19c"} Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.793198 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="075765e33e4574f5802059dc5ca81153f406e04dfef685d747e85211c036b19c" Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.793234 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534934-nscjz" Feb 26 08:54:04 crc kubenswrapper[4741]: W0226 08:54:04.883655 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcffc65f6_91a1_45b8_b723_ac972e12e9f9.slice/crio-c2cc7edf45bbec1e586d509ec5605c37c0ae5c6497a814db8846bf47f5d2e5c2 WatchSource:0}: Error finding container c2cc7edf45bbec1e586d509ec5605c37c0ae5c6497a814db8846bf47f5d2e5c2: Status 404 returned error can't find the container with id c2cc7edf45bbec1e586d509ec5605c37c0ae5c6497a814db8846bf47f5d2e5c2 Feb 26 08:54:04 crc kubenswrapper[4741]: I0226 08:54:04.886873 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n"] Feb 26 08:54:05 crc kubenswrapper[4741]: I0226 08:54:05.512337 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534928-75cn2"] Feb 26 08:54:05 crc kubenswrapper[4741]: I0226 08:54:05.525074 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534928-75cn2"] Feb 26 08:54:05 crc kubenswrapper[4741]: I0226 08:54:05.996480 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a5a8c99-3134-4171-94e9-666b9c5ca8a7" path="/var/lib/kubelet/pods/4a5a8c99-3134-4171-94e9-666b9c5ca8a7/volumes" Feb 26 08:54:05 crc kubenswrapper[4741]: I0226 
08:54:05.997563 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" event={"ID":"cffc65f6-91a1-45b8-b723-ac972e12e9f9","Type":"ContainerStarted","Data":"84051507948e1568be2edc658b929b1959910903764c0680dce3064aa9e0d3f0"} Feb 26 08:54:05 crc kubenswrapper[4741]: I0226 08:54:05.997603 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" event={"ID":"cffc65f6-91a1-45b8-b723-ac972e12e9f9","Type":"ContainerStarted","Data":"c2cc7edf45bbec1e586d509ec5605c37c0ae5c6497a814db8846bf47f5d2e5c2"} Feb 26 08:54:06 crc kubenswrapper[4741]: I0226 08:54:06.028478 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" podStartSLOduration=2.621869205 podStartE2EDuration="3.028452107s" podCreationTimestamp="2026-02-26 08:54:03 +0000 UTC" firstStartedPulling="2026-02-26 08:54:04.887599876 +0000 UTC m=+2479.883537263" lastFinishedPulling="2026-02-26 08:54:05.294182758 +0000 UTC m=+2480.290120165" observedRunningTime="2026-02-26 08:54:06.024215746 +0000 UTC m=+2481.020153143" watchObservedRunningTime="2026-02-26 08:54:06.028452107 +0000 UTC m=+2481.024389494" Feb 26 08:54:08 crc kubenswrapper[4741]: I0226 08:54:08.070215 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-gbd2f"] Feb 26 08:54:08 crc kubenswrapper[4741]: I0226 08:54:08.100635 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-gbd2f"] Feb 26 08:54:09 crc kubenswrapper[4741]: I0226 08:54:09.806640 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60020180-cead-4bfa-bd7c-7637b12f274c" path="/var/lib/kubelet/pods/60020180-cead-4bfa-bd7c-7637b12f274c/volumes" Feb 26 08:54:19 crc kubenswrapper[4741]: I0226 08:54:19.788792 4741 scope.go:117] "RemoveContainer" 
containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:54:19 crc kubenswrapper[4741]: E0226 08:54:19.790020 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:54:30 crc kubenswrapper[4741]: I0226 08:54:30.788811 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:54:30 crc kubenswrapper[4741]: E0226 08:54:30.789882 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:54:42 crc kubenswrapper[4741]: I0226 08:54:42.788949 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:54:42 crc kubenswrapper[4741]: E0226 08:54:42.790904 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:54:52 crc kubenswrapper[4741]: I0226 08:54:52.668068 4741 generic.go:334] 
"Generic (PLEG): container finished" podID="cffc65f6-91a1-45b8-b723-ac972e12e9f9" containerID="84051507948e1568be2edc658b929b1959910903764c0680dce3064aa9e0d3f0" exitCode=0 Feb 26 08:54:52 crc kubenswrapper[4741]: I0226 08:54:52.668212 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" event={"ID":"cffc65f6-91a1-45b8-b723-ac972e12e9f9","Type":"ContainerDied","Data":"84051507948e1568be2edc658b929b1959910903764c0680dce3064aa9e0d3f0"} Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.177926 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.237098 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cffc65f6-91a1-45b8-b723-ac972e12e9f9-inventory\") pod \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\" (UID: \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\") " Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.237235 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfd6m\" (UniqueName: \"kubernetes.io/projected/cffc65f6-91a1-45b8-b723-ac972e12e9f9-kube-api-access-dfd6m\") pod \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\" (UID: \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\") " Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.237430 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cffc65f6-91a1-45b8-b723-ac972e12e9f9-ssh-key-openstack-edpm-ipam\") pod \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\" (UID: \"cffc65f6-91a1-45b8-b723-ac972e12e9f9\") " Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.243023 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cffc65f6-91a1-45b8-b723-ac972e12e9f9-kube-api-access-dfd6m" (OuterVolumeSpecName: "kube-api-access-dfd6m") pod "cffc65f6-91a1-45b8-b723-ac972e12e9f9" (UID: "cffc65f6-91a1-45b8-b723-ac972e12e9f9"). InnerVolumeSpecName "kube-api-access-dfd6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.272647 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cffc65f6-91a1-45b8-b723-ac972e12e9f9-inventory" (OuterVolumeSpecName: "inventory") pod "cffc65f6-91a1-45b8-b723-ac972e12e9f9" (UID: "cffc65f6-91a1-45b8-b723-ac972e12e9f9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.273927 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cffc65f6-91a1-45b8-b723-ac972e12e9f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cffc65f6-91a1-45b8-b723-ac972e12e9f9" (UID: "cffc65f6-91a1-45b8-b723-ac972e12e9f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.342546 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cffc65f6-91a1-45b8-b723-ac972e12e9f9-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.342606 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfd6m\" (UniqueName: \"kubernetes.io/projected/cffc65f6-91a1-45b8-b723-ac972e12e9f9-kube-api-access-dfd6m\") on node \"crc\" DevicePath \"\"" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.342625 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cffc65f6-91a1-45b8-b723-ac972e12e9f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.692705 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" event={"ID":"cffc65f6-91a1-45b8-b723-ac972e12e9f9","Type":"ContainerDied","Data":"c2cc7edf45bbec1e586d509ec5605c37c0ae5c6497a814db8846bf47f5d2e5c2"} Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.692765 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2cc7edf45bbec1e586d509ec5605c37c0ae5c6497a814db8846bf47f5d2e5c2" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.693183 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-st96n" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.815628 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-srzsc"] Feb 26 08:54:54 crc kubenswrapper[4741]: E0226 08:54:54.822142 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507b75c8-121a-4d89-95a2-f2a480783291" containerName="oc" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.822187 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="507b75c8-121a-4d89-95a2-f2a480783291" containerName="oc" Feb 26 08:54:54 crc kubenswrapper[4741]: E0226 08:54:54.822213 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cffc65f6-91a1-45b8-b723-ac972e12e9f9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.822227 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="cffc65f6-91a1-45b8-b723-ac972e12e9f9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.822558 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="cffc65f6-91a1-45b8-b723-ac972e12e9f9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.822581 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="507b75c8-121a-4d89-95a2-f2a480783291" containerName="oc" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.823740 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.826324 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.827198 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.827455 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.829761 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.835304 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-srzsc"] Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.869350 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3830734c-a696-440b-80fc-b2b3e1d29cf4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-srzsc\" (UID: \"3830734c-a696-440b-80fc-b2b3e1d29cf4\") " pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.869632 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh59z\" (UniqueName: \"kubernetes.io/projected/3830734c-a696-440b-80fc-b2b3e1d29cf4-kube-api-access-lh59z\") pod \"ssh-known-hosts-edpm-deployment-srzsc\" (UID: \"3830734c-a696-440b-80fc-b2b3e1d29cf4\") " pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.870275 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3830734c-a696-440b-80fc-b2b3e1d29cf4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-srzsc\" (UID: \"3830734c-a696-440b-80fc-b2b3e1d29cf4\") " pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.974561 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3830734c-a696-440b-80fc-b2b3e1d29cf4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-srzsc\" (UID: \"3830734c-a696-440b-80fc-b2b3e1d29cf4\") " pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.974729 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3830734c-a696-440b-80fc-b2b3e1d29cf4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-srzsc\" (UID: \"3830734c-a696-440b-80fc-b2b3e1d29cf4\") " pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.974786 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh59z\" (UniqueName: \"kubernetes.io/projected/3830734c-a696-440b-80fc-b2b3e1d29cf4-kube-api-access-lh59z\") pod \"ssh-known-hosts-edpm-deployment-srzsc\" (UID: \"3830734c-a696-440b-80fc-b2b3e1d29cf4\") " pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.980493 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3830734c-a696-440b-80fc-b2b3e1d29cf4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-srzsc\" (UID: \"3830734c-a696-440b-80fc-b2b3e1d29cf4\") " pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" Feb 26 08:54:54 crc kubenswrapper[4741]: 
I0226 08:54:54.982325 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3830734c-a696-440b-80fc-b2b3e1d29cf4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-srzsc\" (UID: \"3830734c-a696-440b-80fc-b2b3e1d29cf4\") " pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" Feb 26 08:54:54 crc kubenswrapper[4741]: I0226 08:54:54.993976 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh59z\" (UniqueName: \"kubernetes.io/projected/3830734c-a696-440b-80fc-b2b3e1d29cf4-kube-api-access-lh59z\") pod \"ssh-known-hosts-edpm-deployment-srzsc\" (UID: \"3830734c-a696-440b-80fc-b2b3e1d29cf4\") " pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" Feb 26 08:54:55 crc kubenswrapper[4741]: I0226 08:54:55.147085 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" Feb 26 08:54:55 crc kubenswrapper[4741]: I0226 08:54:55.742165 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-srzsc"] Feb 26 08:54:56 crc kubenswrapper[4741]: I0226 08:54:56.721130 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" event={"ID":"3830734c-a696-440b-80fc-b2b3e1d29cf4","Type":"ContainerStarted","Data":"349955b35a72c1d4fcda6b931ae77aa4a5adf2c48d7ec92f549b1624b1373ebe"} Feb 26 08:54:56 crc kubenswrapper[4741]: I0226 08:54:56.721526 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" event={"ID":"3830734c-a696-440b-80fc-b2b3e1d29cf4","Type":"ContainerStarted","Data":"667befe4e2cecf14935b936c6cf4a1dfd7440791778144f0e89ea7f6583435c4"} Feb 26 08:54:56 crc kubenswrapper[4741]: I0226 08:54:56.748034 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" podStartSLOduration=2.295653701 
podStartE2EDuration="2.748009016s" podCreationTimestamp="2026-02-26 08:54:54 +0000 UTC" firstStartedPulling="2026-02-26 08:54:55.743934779 +0000 UTC m=+2530.739872166" lastFinishedPulling="2026-02-26 08:54:56.196290094 +0000 UTC m=+2531.192227481" observedRunningTime="2026-02-26 08:54:56.741266264 +0000 UTC m=+2531.737203671" watchObservedRunningTime="2026-02-26 08:54:56.748009016 +0000 UTC m=+2531.743946403" Feb 26 08:54:57 crc kubenswrapper[4741]: I0226 08:54:57.787743 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:54:57 crc kubenswrapper[4741]: E0226 08:54:57.788375 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:55:01 crc kubenswrapper[4741]: I0226 08:55:01.148760 4741 scope.go:117] "RemoveContainer" containerID="341aa065ea73c9d6ed89e30521984da4287e2fe7f85601afe6b583b013713dcf" Feb 26 08:55:01 crc kubenswrapper[4741]: I0226 08:55:01.203735 4741 scope.go:117] "RemoveContainer" containerID="68bd1024267507e2eaae7b2ae93d53e7c88a93cd775721defce9be561b65351c" Feb 26 08:55:03 crc kubenswrapper[4741]: I0226 08:55:03.810625 4741 generic.go:334] "Generic (PLEG): container finished" podID="3830734c-a696-440b-80fc-b2b3e1d29cf4" containerID="349955b35a72c1d4fcda6b931ae77aa4a5adf2c48d7ec92f549b1624b1373ebe" exitCode=0 Feb 26 08:55:03 crc kubenswrapper[4741]: I0226 08:55:03.810756 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" 
event={"ID":"3830734c-a696-440b-80fc-b2b3e1d29cf4","Type":"ContainerDied","Data":"349955b35a72c1d4fcda6b931ae77aa4a5adf2c48d7ec92f549b1624b1373ebe"} Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.321182 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.466996 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3830734c-a696-440b-80fc-b2b3e1d29cf4-ssh-key-openstack-edpm-ipam\") pod \"3830734c-a696-440b-80fc-b2b3e1d29cf4\" (UID: \"3830734c-a696-440b-80fc-b2b3e1d29cf4\") " Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.467671 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh59z\" (UniqueName: \"kubernetes.io/projected/3830734c-a696-440b-80fc-b2b3e1d29cf4-kube-api-access-lh59z\") pod \"3830734c-a696-440b-80fc-b2b3e1d29cf4\" (UID: \"3830734c-a696-440b-80fc-b2b3e1d29cf4\") " Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.467841 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3830734c-a696-440b-80fc-b2b3e1d29cf4-inventory-0\") pod \"3830734c-a696-440b-80fc-b2b3e1d29cf4\" (UID: \"3830734c-a696-440b-80fc-b2b3e1d29cf4\") " Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.474195 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3830734c-a696-440b-80fc-b2b3e1d29cf4-kube-api-access-lh59z" (OuterVolumeSpecName: "kube-api-access-lh59z") pod "3830734c-a696-440b-80fc-b2b3e1d29cf4" (UID: "3830734c-a696-440b-80fc-b2b3e1d29cf4"). InnerVolumeSpecName "kube-api-access-lh59z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.511431 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3830734c-a696-440b-80fc-b2b3e1d29cf4-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "3830734c-a696-440b-80fc-b2b3e1d29cf4" (UID: "3830734c-a696-440b-80fc-b2b3e1d29cf4"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.511988 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3830734c-a696-440b-80fc-b2b3e1d29cf4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3830734c-a696-440b-80fc-b2b3e1d29cf4" (UID: "3830734c-a696-440b-80fc-b2b3e1d29cf4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.571506 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh59z\" (UniqueName: \"kubernetes.io/projected/3830734c-a696-440b-80fc-b2b3e1d29cf4-kube-api-access-lh59z\") on node \"crc\" DevicePath \"\"" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.571549 4741 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3830734c-a696-440b-80fc-b2b3e1d29cf4-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.571561 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3830734c-a696-440b-80fc-b2b3e1d29cf4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.842382 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" 
event={"ID":"3830734c-a696-440b-80fc-b2b3e1d29cf4","Type":"ContainerDied","Data":"667befe4e2cecf14935b936c6cf4a1dfd7440791778144f0e89ea7f6583435c4"} Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.842790 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="667befe4e2cecf14935b936c6cf4a1dfd7440791778144f0e89ea7f6583435c4" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.842475 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-srzsc" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.948307 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28"] Feb 26 08:55:05 crc kubenswrapper[4741]: E0226 08:55:05.949729 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3830734c-a696-440b-80fc-b2b3e1d29cf4" containerName="ssh-known-hosts-edpm-deployment" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.949812 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3830734c-a696-440b-80fc-b2b3e1d29cf4" containerName="ssh-known-hosts-edpm-deployment" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.950372 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3830734c-a696-440b-80fc-b2b3e1d29cf4" containerName="ssh-known-hosts-edpm-deployment" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.951645 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.955566 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.955706 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.955779 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.958576 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:55:05 crc kubenswrapper[4741]: I0226 08:55:05.962783 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28"] Feb 26 08:55:06 crc kubenswrapper[4741]: I0226 08:55:06.086755 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-drt28\" (UID: \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" Feb 26 08:55:06 crc kubenswrapper[4741]: I0226 08:55:06.087189 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqwsc\" (UniqueName: \"kubernetes.io/projected/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-kube-api-access-gqwsc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-drt28\" (UID: \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" Feb 26 08:55:06 crc kubenswrapper[4741]: I0226 08:55:06.087357 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-drt28\" (UID: \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" Feb 26 08:55:06 crc kubenswrapper[4741]: I0226 08:55:06.191153 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-drt28\" (UID: \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" Feb 26 08:55:06 crc kubenswrapper[4741]: I0226 08:55:06.191701 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-drt28\" (UID: \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" Feb 26 08:55:06 crc kubenswrapper[4741]: I0226 08:55:06.191768 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqwsc\" (UniqueName: \"kubernetes.io/projected/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-kube-api-access-gqwsc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-drt28\" (UID: \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" Feb 26 08:55:06 crc kubenswrapper[4741]: I0226 08:55:06.198541 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-drt28\" (UID: 
\"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" Feb 26 08:55:06 crc kubenswrapper[4741]: I0226 08:55:06.210805 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-drt28\" (UID: \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" Feb 26 08:55:06 crc kubenswrapper[4741]: I0226 08:55:06.228031 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqwsc\" (UniqueName: \"kubernetes.io/projected/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-kube-api-access-gqwsc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-drt28\" (UID: \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" Feb 26 08:55:06 crc kubenswrapper[4741]: I0226 08:55:06.276070 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" Feb 26 08:55:06 crc kubenswrapper[4741]: I0226 08:55:06.972859 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28"] Feb 26 08:55:07 crc kubenswrapper[4741]: I0226 08:55:07.889120 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" event={"ID":"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a","Type":"ContainerStarted","Data":"dab3f3239fe69a400ada475e037b9cf196c3a201099d8678a7278d904c894954"} Feb 26 08:55:07 crc kubenswrapper[4741]: I0226 08:55:07.889445 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" event={"ID":"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a","Type":"ContainerStarted","Data":"91a11742cc7666a1927d62dee0ea42612ba906c25281f56f3353d3e07143a0a8"} Feb 26 08:55:07 crc kubenswrapper[4741]: I0226 08:55:07.914902 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" podStartSLOduration=2.5121694530000003 podStartE2EDuration="2.914866884s" podCreationTimestamp="2026-02-26 08:55:05 +0000 UTC" firstStartedPulling="2026-02-26 08:55:06.98329388 +0000 UTC m=+2541.979231267" lastFinishedPulling="2026-02-26 08:55:07.385991271 +0000 UTC m=+2542.381928698" observedRunningTime="2026-02-26 08:55:07.90807689 +0000 UTC m=+2542.904014297" watchObservedRunningTime="2026-02-26 08:55:07.914866884 +0000 UTC m=+2542.910804281" Feb 26 08:55:09 crc kubenswrapper[4741]: I0226 08:55:09.788819 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:55:09 crc kubenswrapper[4741]: E0226 08:55:09.789613 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:55:15 crc kubenswrapper[4741]: I0226 08:55:15.997998 4741 generic.go:334] "Generic (PLEG): container finished" podID="bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a" containerID="dab3f3239fe69a400ada475e037b9cf196c3a201099d8678a7278d904c894954" exitCode=0 Feb 26 08:55:15 crc kubenswrapper[4741]: I0226 08:55:15.998094 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" event={"ID":"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a","Type":"ContainerDied","Data":"dab3f3239fe69a400ada475e037b9cf196c3a201099d8678a7278d904c894954"} Feb 26 08:55:17 crc kubenswrapper[4741]: I0226 08:55:17.492629 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" Feb 26 08:55:17 crc kubenswrapper[4741]: I0226 08:55:17.559628 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-ssh-key-openstack-edpm-ipam\") pod \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\" (UID: \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\") " Feb 26 08:55:17 crc kubenswrapper[4741]: I0226 08:55:17.560228 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-inventory\") pod \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\" (UID: \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\") " Feb 26 08:55:17 crc kubenswrapper[4741]: I0226 08:55:17.560360 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqwsc\" (UniqueName: 
\"kubernetes.io/projected/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-kube-api-access-gqwsc\") pod \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\" (UID: \"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a\") " Feb 26 08:55:17 crc kubenswrapper[4741]: I0226 08:55:17.568080 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-kube-api-access-gqwsc" (OuterVolumeSpecName: "kube-api-access-gqwsc") pod "bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a" (UID: "bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a"). InnerVolumeSpecName "kube-api-access-gqwsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:55:17 crc kubenswrapper[4741]: I0226 08:55:17.610062 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a" (UID: "bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:55:17 crc kubenswrapper[4741]: I0226 08:55:17.610399 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-inventory" (OuterVolumeSpecName: "inventory") pod "bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a" (UID: "bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:55:17 crc kubenswrapper[4741]: I0226 08:55:17.664450 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:55:17 crc kubenswrapper[4741]: I0226 08:55:17.664514 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 08:55:17 crc kubenswrapper[4741]: I0226 08:55:17.664530 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqwsc\" (UniqueName: \"kubernetes.io/projected/bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a-kube-api-access-gqwsc\") on node \"crc\" DevicePath \"\"" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.021756 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" event={"ID":"bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a","Type":"ContainerDied","Data":"91a11742cc7666a1927d62dee0ea42612ba906c25281f56f3353d3e07143a0a8"} Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.021811 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91a11742cc7666a1927d62dee0ea42612ba906c25281f56f3353d3e07143a0a8" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.021877 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-drt28" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.129774 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx"] Feb 26 08:55:18 crc kubenswrapper[4741]: E0226 08:55:18.130633 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.130651 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.130941 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.132282 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.136141 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.136295 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.136686 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.140380 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.148348 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx"] Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.185842 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjqzc\" (UniqueName: \"kubernetes.io/projected/815f01fb-9d09-4745-836f-e4fd93594bb3-kube-api-access-bjqzc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx\" (UID: \"815f01fb-9d09-4745-836f-e4fd93594bb3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.185966 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/815f01fb-9d09-4745-836f-e4fd93594bb3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx\" (UID: \"815f01fb-9d09-4745-836f-e4fd93594bb3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.186012 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/815f01fb-9d09-4745-836f-e4fd93594bb3-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx\" (UID: \"815f01fb-9d09-4745-836f-e4fd93594bb3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.289148 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjqzc\" (UniqueName: \"kubernetes.io/projected/815f01fb-9d09-4745-836f-e4fd93594bb3-kube-api-access-bjqzc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx\" (UID: \"815f01fb-9d09-4745-836f-e4fd93594bb3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.289248 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/815f01fb-9d09-4745-836f-e4fd93594bb3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx\" (UID: \"815f01fb-9d09-4745-836f-e4fd93594bb3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.289286 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/815f01fb-9d09-4745-836f-e4fd93594bb3-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx\" (UID: \"815f01fb-9d09-4745-836f-e4fd93594bb3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.294355 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/815f01fb-9d09-4745-836f-e4fd93594bb3-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx\" (UID: \"815f01fb-9d09-4745-836f-e4fd93594bb3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.294355 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/815f01fb-9d09-4745-836f-e4fd93594bb3-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx\" (UID: \"815f01fb-9d09-4745-836f-e4fd93594bb3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.313067 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjqzc\" (UniqueName: \"kubernetes.io/projected/815f01fb-9d09-4745-836f-e4fd93594bb3-kube-api-access-bjqzc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx\" (UID: \"815f01fb-9d09-4745-836f-e4fd93594bb3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" Feb 26 08:55:18 crc kubenswrapper[4741]: I0226 08:55:18.455298 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" Feb 26 08:55:19 crc kubenswrapper[4741]: I0226 08:55:19.072736 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx"] Feb 26 08:55:20 crc kubenswrapper[4741]: I0226 08:55:20.047836 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" event={"ID":"815f01fb-9d09-4745-836f-e4fd93594bb3","Type":"ContainerStarted","Data":"55306cc11c1711d28e985c3c6bab7d5f1a5435cef4db349419694407e8a858d3"} Feb 26 08:55:20 crc kubenswrapper[4741]: I0226 08:55:20.048449 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" event={"ID":"815f01fb-9d09-4745-836f-e4fd93594bb3","Type":"ContainerStarted","Data":"ed8426613e2068e7d56fe33e6e8133aa2e8cbd74f9d316484abf78e6de123d27"} Feb 26 08:55:20 crc kubenswrapper[4741]: I0226 08:55:20.068465 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" podStartSLOduration=1.447959885 podStartE2EDuration="2.068442726s" podCreationTimestamp="2026-02-26 08:55:18 +0000 UTC" firstStartedPulling="2026-02-26 08:55:19.075478643 +0000 UTC m=+2554.071416030" lastFinishedPulling="2026-02-26 08:55:19.695961494 +0000 UTC m=+2554.691898871" observedRunningTime="2026-02-26 08:55:20.065170222 +0000 UTC m=+2555.061107639" watchObservedRunningTime="2026-02-26 08:55:20.068442726 +0000 UTC m=+2555.064380123" Feb 26 08:55:21 crc kubenswrapper[4741]: I0226 08:55:21.788730 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:55:21 crc kubenswrapper[4741]: E0226 08:55:21.789669 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:55:27 crc kubenswrapper[4741]: I0226 08:55:27.073262 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-hng7c"] Feb 26 08:55:27 crc kubenswrapper[4741]: I0226 08:55:27.088176 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-hng7c"] Feb 26 08:55:27 crc kubenswrapper[4741]: I0226 08:55:27.808677 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee9aa4f4-8a5b-4b08-9708-a83062f9bec3" path="/var/lib/kubelet/pods/ee9aa4f4-8a5b-4b08-9708-a83062f9bec3/volumes" Feb 26 08:55:30 crc kubenswrapper[4741]: I0226 08:55:30.178539 4741 generic.go:334] "Generic (PLEG): container finished" podID="815f01fb-9d09-4745-836f-e4fd93594bb3" containerID="55306cc11c1711d28e985c3c6bab7d5f1a5435cef4db349419694407e8a858d3" exitCode=0 Feb 26 08:55:30 crc kubenswrapper[4741]: I0226 08:55:30.178615 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" event={"ID":"815f01fb-9d09-4745-836f-e4fd93594bb3","Type":"ContainerDied","Data":"55306cc11c1711d28e985c3c6bab7d5f1a5435cef4db349419694407e8a858d3"} Feb 26 08:55:31 crc kubenswrapper[4741]: I0226 08:55:31.814678 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" Feb 26 08:55:31 crc kubenswrapper[4741]: I0226 08:55:31.978779 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjqzc\" (UniqueName: \"kubernetes.io/projected/815f01fb-9d09-4745-836f-e4fd93594bb3-kube-api-access-bjqzc\") pod \"815f01fb-9d09-4745-836f-e4fd93594bb3\" (UID: \"815f01fb-9d09-4745-836f-e4fd93594bb3\") " Feb 26 08:55:31 crc kubenswrapper[4741]: I0226 08:55:31.979355 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/815f01fb-9d09-4745-836f-e4fd93594bb3-ssh-key-openstack-edpm-ipam\") pod \"815f01fb-9d09-4745-836f-e4fd93594bb3\" (UID: \"815f01fb-9d09-4745-836f-e4fd93594bb3\") " Feb 26 08:55:31 crc kubenswrapper[4741]: I0226 08:55:31.979504 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/815f01fb-9d09-4745-836f-e4fd93594bb3-inventory\") pod \"815f01fb-9d09-4745-836f-e4fd93594bb3\" (UID: \"815f01fb-9d09-4745-836f-e4fd93594bb3\") " Feb 26 08:55:31 crc kubenswrapper[4741]: I0226 08:55:31.994358 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815f01fb-9d09-4745-836f-e4fd93594bb3-kube-api-access-bjqzc" (OuterVolumeSpecName: "kube-api-access-bjqzc") pod "815f01fb-9d09-4745-836f-e4fd93594bb3" (UID: "815f01fb-9d09-4745-836f-e4fd93594bb3"). InnerVolumeSpecName "kube-api-access-bjqzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.039353 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815f01fb-9d09-4745-836f-e4fd93594bb3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "815f01fb-9d09-4745-836f-e4fd93594bb3" (UID: "815f01fb-9d09-4745-836f-e4fd93594bb3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.061278 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815f01fb-9d09-4745-836f-e4fd93594bb3-inventory" (OuterVolumeSpecName: "inventory") pod "815f01fb-9d09-4745-836f-e4fd93594bb3" (UID: "815f01fb-9d09-4745-836f-e4fd93594bb3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.085538 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/815f01fb-9d09-4745-836f-e4fd93594bb3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.085575 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/815f01fb-9d09-4745-836f-e4fd93594bb3-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.085586 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjqzc\" (UniqueName: \"kubernetes.io/projected/815f01fb-9d09-4745-836f-e4fd93594bb3-kube-api-access-bjqzc\") on node \"crc\" DevicePath \"\"" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.210320 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" 
event={"ID":"815f01fb-9d09-4745-836f-e4fd93594bb3","Type":"ContainerDied","Data":"ed8426613e2068e7d56fe33e6e8133aa2e8cbd74f9d316484abf78e6de123d27"} Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.210375 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed8426613e2068e7d56fe33e6e8133aa2e8cbd74f9d316484abf78e6de123d27" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.210401 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.354158 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9"] Feb 26 08:55:32 crc kubenswrapper[4741]: E0226 08:55:32.354898 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815f01fb-9d09-4745-836f-e4fd93594bb3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.354922 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="815f01fb-9d09-4745-836f-e4fd93594bb3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.355237 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="815f01fb-9d09-4745-836f-e4fd93594bb3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.356351 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.360260 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.360272 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.360334 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.360517 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.360686 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.360728 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.366700 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.367181 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.367481 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.382619 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9"] Feb 26 08:55:32 crc 
kubenswrapper[4741]: I0226 08:55:32.496797 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.496858 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.496894 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.497195 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc 
kubenswrapper[4741]: I0226 08:55:32.497711 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.497983 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.498216 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.498452 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" 
Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.498615 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.498659 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.498725 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.498764 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc 
kubenswrapper[4741]: I0226 08:55:32.498791 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.498891 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.499150 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.499290 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtldp\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-kube-api-access-rtldp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc 
kubenswrapper[4741]: I0226 08:55:32.602944 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtldp\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-kube-api-access-rtldp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.603469 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.603721 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.603936 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.604318 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.605584 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.605827 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.606068 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.606356 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.606606 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.606833 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.607052 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.607257 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.607413 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.607596 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.607885 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.611144 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.611388 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.611659 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.611940 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.613358 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.613525 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.613794 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.615450 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.615846 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.616208 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.617552 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.617613 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.618814 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.624341 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.629706 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtldp\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-kube-api-access-rtldp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.641197 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.676876 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:55:32 crc kubenswrapper[4741]: I0226 08:55:32.794423 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:55:32 crc kubenswrapper[4741]: E0226 08:55:32.795212 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:55:33 crc kubenswrapper[4741]: I0226 08:55:33.319567 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9"] Feb 26 08:55:34 crc kubenswrapper[4741]: I0226 08:55:34.235386 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" event={"ID":"34e96b16-fd87-4660-bbdb-8e62046ab2ce","Type":"ContainerStarted","Data":"6b3aaaca44cbc38ed4141ec5c9d451fb57d6f35a0a859a55365c120a8ebd19cb"} Feb 26 08:55:35 crc kubenswrapper[4741]: I0226 08:55:35.251831 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" event={"ID":"34e96b16-fd87-4660-bbdb-8e62046ab2ce","Type":"ContainerStarted","Data":"9386ec4603b55aac37fefb0bba432c8040e620afbbd2aefeafb0f23bb44ad4fb"} Feb 26 08:55:35 crc kubenswrapper[4741]: I0226 08:55:35.278914 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" podStartSLOduration=2.509203943 podStartE2EDuration="3.278875859s" podCreationTimestamp="2026-02-26 08:55:32 +0000 UTC" 
firstStartedPulling="2026-02-26 08:55:33.326771199 +0000 UTC m=+2568.322708586" lastFinishedPulling="2026-02-26 08:55:34.096443115 +0000 UTC m=+2569.092380502" observedRunningTime="2026-02-26 08:55:35.270799559 +0000 UTC m=+2570.266736946" watchObservedRunningTime="2026-02-26 08:55:35.278875859 +0000 UTC m=+2570.274813246" Feb 26 08:55:43 crc kubenswrapper[4741]: I0226 08:55:43.788790 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:55:43 crc kubenswrapper[4741]: E0226 08:55:43.789690 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:55:54 crc kubenswrapper[4741]: I0226 08:55:54.788988 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:55:54 crc kubenswrapper[4741]: E0226 08:55:54.790852 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:56:00 crc kubenswrapper[4741]: I0226 08:56:00.176412 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534936-bcjv2"] Feb 26 08:56:00 crc kubenswrapper[4741]: I0226 08:56:00.187328 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-infra/auto-csr-approver-29534936-bcjv2"] Feb 26 08:56:00 crc kubenswrapper[4741]: I0226 08:56:00.187480 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534936-bcjv2" Feb 26 08:56:00 crc kubenswrapper[4741]: I0226 08:56:00.190833 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:56:00 crc kubenswrapper[4741]: I0226 08:56:00.191087 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:56:00 crc kubenswrapper[4741]: I0226 08:56:00.191834 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:56:00 crc kubenswrapper[4741]: I0226 08:56:00.333026 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42vpg\" (UniqueName: \"kubernetes.io/projected/4cc40d77-5635-4ff0-9c9f-0c5c1c02085e-kube-api-access-42vpg\") pod \"auto-csr-approver-29534936-bcjv2\" (UID: \"4cc40d77-5635-4ff0-9c9f-0c5c1c02085e\") " pod="openshift-infra/auto-csr-approver-29534936-bcjv2" Feb 26 08:56:00 crc kubenswrapper[4741]: I0226 08:56:00.436870 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42vpg\" (UniqueName: \"kubernetes.io/projected/4cc40d77-5635-4ff0-9c9f-0c5c1c02085e-kube-api-access-42vpg\") pod \"auto-csr-approver-29534936-bcjv2\" (UID: \"4cc40d77-5635-4ff0-9c9f-0c5c1c02085e\") " pod="openshift-infra/auto-csr-approver-29534936-bcjv2" Feb 26 08:56:00 crc kubenswrapper[4741]: I0226 08:56:00.464354 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42vpg\" (UniqueName: \"kubernetes.io/projected/4cc40d77-5635-4ff0-9c9f-0c5c1c02085e-kube-api-access-42vpg\") pod \"auto-csr-approver-29534936-bcjv2\" (UID: \"4cc40d77-5635-4ff0-9c9f-0c5c1c02085e\") " 
pod="openshift-infra/auto-csr-approver-29534936-bcjv2" Feb 26 08:56:00 crc kubenswrapper[4741]: I0226 08:56:00.511077 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534936-bcjv2" Feb 26 08:56:01 crc kubenswrapper[4741]: I0226 08:56:01.107443 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534936-bcjv2"] Feb 26 08:56:01 crc kubenswrapper[4741]: I0226 08:56:01.357962 4741 scope.go:117] "RemoveContainer" containerID="81ba6762658d3d935a5f05c81677b2a119c163ba2c38a112a52ff6c911fa97ab" Feb 26 08:56:01 crc kubenswrapper[4741]: I0226 08:56:01.620890 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534936-bcjv2" event={"ID":"4cc40d77-5635-4ff0-9c9f-0c5c1c02085e","Type":"ContainerStarted","Data":"61b603b71c1dcff54425f6dde73d80b91d012b1a8be3faf2268a76283bf96385"} Feb 26 08:56:02 crc kubenswrapper[4741]: I0226 08:56:02.637826 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534936-bcjv2" event={"ID":"4cc40d77-5635-4ff0-9c9f-0c5c1c02085e","Type":"ContainerStarted","Data":"99fce8418122f68fa5d34db4c0f0de0c7fa60dac67a38683928f1b079ecc2cea"} Feb 26 08:56:02 crc kubenswrapper[4741]: I0226 08:56:02.680147 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534936-bcjv2" podStartSLOduration=1.86453764 podStartE2EDuration="2.680091342s" podCreationTimestamp="2026-02-26 08:56:00 +0000 UTC" firstStartedPulling="2026-02-26 08:56:01.115282075 +0000 UTC m=+2596.111219452" lastFinishedPulling="2026-02-26 08:56:01.930835767 +0000 UTC m=+2596.926773154" observedRunningTime="2026-02-26 08:56:02.659529997 +0000 UTC m=+2597.655467394" watchObservedRunningTime="2026-02-26 08:56:02.680091342 +0000 UTC m=+2597.676028739" Feb 26 08:56:03 crc kubenswrapper[4741]: I0226 08:56:03.672724 4741 generic.go:334] "Generic (PLEG): container 
finished" podID="4cc40d77-5635-4ff0-9c9f-0c5c1c02085e" containerID="99fce8418122f68fa5d34db4c0f0de0c7fa60dac67a38683928f1b079ecc2cea" exitCode=0 Feb 26 08:56:03 crc kubenswrapper[4741]: I0226 08:56:03.673097 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534936-bcjv2" event={"ID":"4cc40d77-5635-4ff0-9c9f-0c5c1c02085e","Type":"ContainerDied","Data":"99fce8418122f68fa5d34db4c0f0de0c7fa60dac67a38683928f1b079ecc2cea"} Feb 26 08:56:05 crc kubenswrapper[4741]: I0226 08:56:05.141702 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534936-bcjv2" Feb 26 08:56:05 crc kubenswrapper[4741]: I0226 08:56:05.316815 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42vpg\" (UniqueName: \"kubernetes.io/projected/4cc40d77-5635-4ff0-9c9f-0c5c1c02085e-kube-api-access-42vpg\") pod \"4cc40d77-5635-4ff0-9c9f-0c5c1c02085e\" (UID: \"4cc40d77-5635-4ff0-9c9f-0c5c1c02085e\") " Feb 26 08:56:05 crc kubenswrapper[4741]: I0226 08:56:05.339457 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc40d77-5635-4ff0-9c9f-0c5c1c02085e-kube-api-access-42vpg" (OuterVolumeSpecName: "kube-api-access-42vpg") pod "4cc40d77-5635-4ff0-9c9f-0c5c1c02085e" (UID: "4cc40d77-5635-4ff0-9c9f-0c5c1c02085e"). InnerVolumeSpecName "kube-api-access-42vpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:56:05 crc kubenswrapper[4741]: I0226 08:56:05.422674 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42vpg\" (UniqueName: \"kubernetes.io/projected/4cc40d77-5635-4ff0-9c9f-0c5c1c02085e-kube-api-access-42vpg\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:05 crc kubenswrapper[4741]: I0226 08:56:05.702625 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534936-bcjv2" event={"ID":"4cc40d77-5635-4ff0-9c9f-0c5c1c02085e","Type":"ContainerDied","Data":"61b603b71c1dcff54425f6dde73d80b91d012b1a8be3faf2268a76283bf96385"} Feb 26 08:56:05 crc kubenswrapper[4741]: I0226 08:56:05.703030 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b603b71c1dcff54425f6dde73d80b91d012b1a8be3faf2268a76283bf96385" Feb 26 08:56:05 crc kubenswrapper[4741]: I0226 08:56:05.702710 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534936-bcjv2" Feb 26 08:56:05 crc kubenswrapper[4741]: I0226 08:56:05.783343 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534930-vwj4n"] Feb 26 08:56:05 crc kubenswrapper[4741]: I0226 08:56:05.804328 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534930-vwj4n"] Feb 26 08:56:06 crc kubenswrapper[4741]: I0226 08:56:06.789382 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:56:06 crc kubenswrapper[4741]: E0226 08:56:06.790411 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:56:07 crc kubenswrapper[4741]: I0226 08:56:07.806236 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="357ed6dc-5c4b-486e-91b0-850eed492bb0" path="/var/lib/kubelet/pods/357ed6dc-5c4b-486e-91b0-850eed492bb0/volumes" Feb 26 08:56:12 crc kubenswrapper[4741]: I0226 08:56:12.035026 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-pcr97"] Feb 26 08:56:12 crc kubenswrapper[4741]: I0226 08:56:12.047198 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-pcr97"] Feb 26 08:56:13 crc kubenswrapper[4741]: I0226 08:56:13.802585 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c215d58-07de-43c3-b0ec-ecade20263dd" path="/var/lib/kubelet/pods/5c215d58-07de-43c3-b0ec-ecade20263dd/volumes" Feb 26 08:56:18 crc kubenswrapper[4741]: E0226 08:56:18.785973 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34e96b16_fd87_4660_bbdb_8e62046ab2ce.slice/crio-9386ec4603b55aac37fefb0bba432c8040e620afbbd2aefeafb0f23bb44ad4fb.scope\": RecentStats: unable to find data in memory cache]" Feb 26 08:56:18 crc kubenswrapper[4741]: I0226 08:56:18.791303 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:56:18 crc kubenswrapper[4741]: E0226 08:56:18.792424 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" 
podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:56:18 crc kubenswrapper[4741]: I0226 08:56:18.876667 4741 generic.go:334] "Generic (PLEG): container finished" podID="34e96b16-fd87-4660-bbdb-8e62046ab2ce" containerID="9386ec4603b55aac37fefb0bba432c8040e620afbbd2aefeafb0f23bb44ad4fb" exitCode=0 Feb 26 08:56:18 crc kubenswrapper[4741]: I0226 08:56:18.876736 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" event={"ID":"34e96b16-fd87-4660-bbdb-8e62046ab2ce","Type":"ContainerDied","Data":"9386ec4603b55aac37fefb0bba432c8040e620afbbd2aefeafb0f23bb44ad4fb"} Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.455932 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.565600 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.565711 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-ovn-combined-ca-bundle\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.565760 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-telemetry-combined-ca-bundle\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: 
\"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.565914 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.565941 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-inventory\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.565976 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-repo-setup-combined-ca-bundle\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.566009 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-ovn-default-certs-0\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.566145 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-telemetry-power-monitoring-combined-ca-bundle\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: 
\"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.566172 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.566297 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-nova-combined-ca-bundle\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.566324 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtldp\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-kube-api-access-rtldp\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.566362 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-neutron-metadata-combined-ca-bundle\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.566418 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-libvirt-combined-ca-bundle\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc 
kubenswrapper[4741]: I0226 08:56:20.566457 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-bootstrap-combined-ca-bundle\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.566485 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.566520 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-ssh-key-openstack-edpm-ipam\") pod \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\" (UID: \"34e96b16-fd87-4660-bbdb-8e62046ab2ce\") " Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.574812 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.579385 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.579946 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.580675 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.580753 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.580812 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.580910 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.580927 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.581174 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.581184 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.581235 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.581326 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.582455 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.592361 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-kube-api-access-rtldp" (OuterVolumeSpecName: "kube-api-access-rtldp") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "kube-api-access-rtldp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.615049 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.615920 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-inventory" (OuterVolumeSpecName: "inventory") pod "34e96b16-fd87-4660-bbdb-8e62046ab2ce" (UID: "34e96b16-fd87-4660-bbdb-8e62046ab2ce"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671377 4741 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671416 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtldp\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-kube-api-access-rtldp\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671430 4741 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671444 4741 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671456 4741 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671466 4741 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671478 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671490 4741 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671502 4741 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671511 4741 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671521 4741 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671532 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671544 4741 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: 
I0226 08:56:20.671557 4741 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671567 4741 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e96b16-fd87-4660-bbdb-8e62046ab2ce-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.671578 4741 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/34e96b16-fd87-4660-bbdb-8e62046ab2ce-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.902817 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" event={"ID":"34e96b16-fd87-4660-bbdb-8e62046ab2ce","Type":"ContainerDied","Data":"6b3aaaca44cbc38ed4141ec5c9d451fb57d6f35a0a859a55365c120a8ebd19cb"} Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.903382 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b3aaaca44cbc38ed4141ec5c9d451fb57d6f35a0a859a55365c120a8ebd19cb" Feb 26 08:56:20 crc kubenswrapper[4741]: I0226 08:56:20.902894 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.046617 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx"] Feb 26 08:56:21 crc kubenswrapper[4741]: E0226 08:56:21.047327 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e96b16-fd87-4660-bbdb-8e62046ab2ce" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.047352 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e96b16-fd87-4660-bbdb-8e62046ab2ce" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 08:56:21 crc kubenswrapper[4741]: E0226 08:56:21.047375 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc40d77-5635-4ff0-9c9f-0c5c1c02085e" containerName="oc" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.047383 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc40d77-5635-4ff0-9c9f-0c5c1c02085e" containerName="oc" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.047705 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e96b16-fd87-4660-bbdb-8e62046ab2ce" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.047727 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc40d77-5635-4ff0-9c9f-0c5c1c02085e" containerName="oc" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.048671 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.055051 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.055436 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.055658 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.055933 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.057158 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.083907 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx"] Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.188586 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.189041 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqt5k\" (UniqueName: \"kubernetes.io/projected/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-kube-api-access-tqt5k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: 
\"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.189611 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.189804 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.189855 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.292394 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.292487 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.292596 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.292710 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqt5k\" (UniqueName: \"kubernetes.io/projected/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-kube-api-access-tqt5k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.292804 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.294344 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: 
\"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.306048 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.306032 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.313190 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.313449 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqt5k\" (UniqueName: \"kubernetes.io/projected/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-kube-api-access-tqt5k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nnjmx\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:21 crc kubenswrapper[4741]: I0226 08:56:21.390740 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:56:22 crc kubenswrapper[4741]: I0226 08:56:22.056366 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx"] Feb 26 08:56:22 crc kubenswrapper[4741]: I0226 08:56:22.928957 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" event={"ID":"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830","Type":"ContainerStarted","Data":"ac92cce1c79d0fe3586e4758ce566dba4a4cec667d0187ec903beb7e689dee72"} Feb 26 08:56:22 crc kubenswrapper[4741]: I0226 08:56:22.929358 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" event={"ID":"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830","Type":"ContainerStarted","Data":"b67d63562df6826b4e1a32afb02f932c4e70412c47e8fd1943d3e65f27bdfa97"} Feb 26 08:56:22 crc kubenswrapper[4741]: I0226 08:56:22.961438 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" podStartSLOduration=1.545637398 podStartE2EDuration="1.961412331s" podCreationTimestamp="2026-02-26 08:56:21 +0000 UTC" firstStartedPulling="2026-02-26 08:56:22.065319898 +0000 UTC m=+2617.061257285" lastFinishedPulling="2026-02-26 08:56:22.481094811 +0000 UTC m=+2617.477032218" observedRunningTime="2026-02-26 08:56:22.948261647 +0000 UTC m=+2617.944199024" watchObservedRunningTime="2026-02-26 08:56:22.961412331 +0000 UTC m=+2617.957349718" Feb 26 08:56:29 crc kubenswrapper[4741]: I0226 08:56:29.787945 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:56:29 crc kubenswrapper[4741]: E0226 08:56:29.789320 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:56:40 crc kubenswrapper[4741]: I0226 08:56:40.788567 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:56:40 crc kubenswrapper[4741]: E0226 08:56:40.789821 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:56:52 crc kubenswrapper[4741]: I0226 08:56:52.788918 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:56:52 crc kubenswrapper[4741]: E0226 08:56:52.790245 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:57:01 crc kubenswrapper[4741]: I0226 08:57:01.534831 4741 scope.go:117] "RemoveContainer" containerID="1870af5d493460dd7616bfeca83b80d4aa2bdd05c3b9b58ac98e878dd26d8885" Feb 26 08:57:01 crc kubenswrapper[4741]: I0226 08:57:01.599748 4741 scope.go:117] "RemoveContainer" containerID="80f56828da29c9874fd504816e4b5cd8d07bf891734037266025f87c8f837c7c" Feb 26 08:57:04 crc kubenswrapper[4741]: 
I0226 08:57:04.788681 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:57:04 crc kubenswrapper[4741]: E0226 08:57:04.789686 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:57:15 crc kubenswrapper[4741]: I0226 08:57:15.803444 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:57:15 crc kubenswrapper[4741]: E0226 08:57:15.804174 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 08:57:26 crc kubenswrapper[4741]: I0226 08:57:26.788781 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 08:57:26 crc kubenswrapper[4741]: I0226 08:57:26.853801 4741 generic.go:334] "Generic (PLEG): container finished" podID="d8d4bb96-ef81-4ac1-af2c-e8f63a53b830" containerID="ac92cce1c79d0fe3586e4758ce566dba4a4cec667d0187ec903beb7e689dee72" exitCode=0 Feb 26 08:57:26 crc kubenswrapper[4741]: I0226 08:57:26.853908 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" 
event={"ID":"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830","Type":"ContainerDied","Data":"ac92cce1c79d0fe3586e4758ce566dba4a4cec667d0187ec903beb7e689dee72"} Feb 26 08:57:27 crc kubenswrapper[4741]: I0226 08:57:27.887448 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"d11bb5b2dc6159523dcd00e9bf5cffe9cafd35e57361a657b4ae10ad61fbb5c4"} Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.594119 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.669392 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ssh-key-openstack-edpm-ipam\") pod \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.669696 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ovn-combined-ca-bundle\") pod \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.669813 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-inventory\") pod \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.669918 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ovncontroller-config-0\") pod \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.669949 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqt5k\" (UniqueName: \"kubernetes.io/projected/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-kube-api-access-tqt5k\") pod \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\" (UID: \"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830\") " Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.701340 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d8d4bb96-ef81-4ac1-af2c-e8f63a53b830" (UID: "d8d4bb96-ef81-4ac1-af2c-e8f63a53b830"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.720405 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-kube-api-access-tqt5k" (OuterVolumeSpecName: "kube-api-access-tqt5k") pod "d8d4bb96-ef81-4ac1-af2c-e8f63a53b830" (UID: "d8d4bb96-ef81-4ac1-af2c-e8f63a53b830"). InnerVolumeSpecName "kube-api-access-tqt5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.772458 4741 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.772800 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqt5k\" (UniqueName: \"kubernetes.io/projected/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-kube-api-access-tqt5k\") on node \"crc\" DevicePath \"\"" Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.789263 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-inventory" (OuterVolumeSpecName: "inventory") pod "d8d4bb96-ef81-4ac1-af2c-e8f63a53b830" (UID: "d8d4bb96-ef81-4ac1-af2c-e8f63a53b830"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.810707 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "d8d4bb96-ef81-4ac1-af2c-e8f63a53b830" (UID: "d8d4bb96-ef81-4ac1-af2c-e8f63a53b830"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.811742 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d8d4bb96-ef81-4ac1-af2c-e8f63a53b830" (UID: "d8d4bb96-ef81-4ac1-af2c-e8f63a53b830"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.874419 4741 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.874451 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.874467 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8d4bb96-ef81-4ac1-af2c-e8f63a53b830-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.941009 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" event={"ID":"d8d4bb96-ef81-4ac1-af2c-e8f63a53b830","Type":"ContainerDied","Data":"b67d63562df6826b4e1a32afb02f932c4e70412c47e8fd1943d3e65f27bdfa97"} Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.941075 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b67d63562df6826b4e1a32afb02f932c4e70412c47e8fd1943d3e65f27bdfa97" Feb 26 08:57:28 crc kubenswrapper[4741]: I0226 08:57:28.941178 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nnjmx" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.015643 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn"] Feb 26 08:57:29 crc kubenswrapper[4741]: E0226 08:57:29.016365 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d4bb96-ef81-4ac1-af2c-e8f63a53b830" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.016385 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d4bb96-ef81-4ac1-af2c-e8f63a53b830" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.016668 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d4bb96-ef81-4ac1-af2c-e8f63a53b830" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.017729 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.027849 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.028157 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.027925 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.028481 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.028642 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.028849 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.031064 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn"] Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.182174 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.182297 4741 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.183921 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.184005 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.184054 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.184498 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-djsjs\" (UniqueName: \"kubernetes.io/projected/34921c87-3a4a-4be3-8a8e-8cae7baf4785-kube-api-access-djsjs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.287556 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.287693 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.287802 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.287871 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.287906 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.288016 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djsjs\" (UniqueName: \"kubernetes.io/projected/34921c87-3a4a-4be3-8a8e-8cae7baf4785-kube-api-access-djsjs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.294566 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.295178 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.295226 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.295837 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.299691 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.308388 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djsjs\" (UniqueName: \"kubernetes.io/projected/34921c87-3a4a-4be3-8a8e-8cae7baf4785-kube-api-access-djsjs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.339328 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.929893 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn"] Feb 26 08:57:29 crc kubenswrapper[4741]: I0226 08:57:29.953628 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" event={"ID":"34921c87-3a4a-4be3-8a8e-8cae7baf4785","Type":"ContainerStarted","Data":"f59704c5546ed2c0eb983e0291a668097281b8f19bd39b0270c54d27eb6be532"} Feb 26 08:57:30 crc kubenswrapper[4741]: I0226 08:57:30.964813 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" event={"ID":"34921c87-3a4a-4be3-8a8e-8cae7baf4785","Type":"ContainerStarted","Data":"ba569d4540731c9d9de92abee23e24ead0d66775cf4affab3b9ba74657b78bcc"} Feb 26 08:57:30 crc kubenswrapper[4741]: I0226 08:57:30.991562 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" podStartSLOduration=2.596854386 podStartE2EDuration="2.991537319s" podCreationTimestamp="2026-02-26 08:57:28 +0000 UTC" firstStartedPulling="2026-02-26 08:57:29.941980827 +0000 UTC m=+2684.937918234" lastFinishedPulling="2026-02-26 08:57:30.33666375 +0000 UTC m=+2685.332601167" observedRunningTime="2026-02-26 08:57:30.981163303 +0000 UTC m=+2685.977100700" watchObservedRunningTime="2026-02-26 08:57:30.991537319 +0000 UTC m=+2685.987474706" Feb 26 08:57:58 crc kubenswrapper[4741]: I0226 08:57:58.144378 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xsnss"] 
Feb 26 08:57:58 crc kubenswrapper[4741]: I0226 08:57:58.149505 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:57:58 crc kubenswrapper[4741]: I0226 08:57:58.187741 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xsnss"] Feb 26 08:57:58 crc kubenswrapper[4741]: I0226 08:57:58.255491 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0d6379-443d-44e8-8542-e395afe813db-utilities\") pod \"community-operators-xsnss\" (UID: \"ee0d6379-443d-44e8-8542-e395afe813db\") " pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:57:58 crc kubenswrapper[4741]: I0226 08:57:58.255673 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0d6379-443d-44e8-8542-e395afe813db-catalog-content\") pod \"community-operators-xsnss\" (UID: \"ee0d6379-443d-44e8-8542-e395afe813db\") " pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:57:58 crc kubenswrapper[4741]: I0226 08:57:58.256094 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mz67\" (UniqueName: \"kubernetes.io/projected/ee0d6379-443d-44e8-8542-e395afe813db-kube-api-access-4mz67\") pod \"community-operators-xsnss\" (UID: \"ee0d6379-443d-44e8-8542-e395afe813db\") " pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:57:58 crc kubenswrapper[4741]: I0226 08:57:58.359619 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mz67\" (UniqueName: \"kubernetes.io/projected/ee0d6379-443d-44e8-8542-e395afe813db-kube-api-access-4mz67\") pod \"community-operators-xsnss\" (UID: \"ee0d6379-443d-44e8-8542-e395afe813db\") " 
pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:57:58 crc kubenswrapper[4741]: I0226 08:57:58.359768 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0d6379-443d-44e8-8542-e395afe813db-utilities\") pod \"community-operators-xsnss\" (UID: \"ee0d6379-443d-44e8-8542-e395afe813db\") " pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:57:58 crc kubenswrapper[4741]: I0226 08:57:58.359825 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0d6379-443d-44e8-8542-e395afe813db-catalog-content\") pod \"community-operators-xsnss\" (UID: \"ee0d6379-443d-44e8-8542-e395afe813db\") " pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:57:58 crc kubenswrapper[4741]: I0226 08:57:58.360925 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0d6379-443d-44e8-8542-e395afe813db-catalog-content\") pod \"community-operators-xsnss\" (UID: \"ee0d6379-443d-44e8-8542-e395afe813db\") " pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:57:58 crc kubenswrapper[4741]: I0226 08:57:58.361043 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0d6379-443d-44e8-8542-e395afe813db-utilities\") pod \"community-operators-xsnss\" (UID: \"ee0d6379-443d-44e8-8542-e395afe813db\") " pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:57:58 crc kubenswrapper[4741]: I0226 08:57:58.394301 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mz67\" (UniqueName: \"kubernetes.io/projected/ee0d6379-443d-44e8-8542-e395afe813db-kube-api-access-4mz67\") pod \"community-operators-xsnss\" (UID: \"ee0d6379-443d-44e8-8542-e395afe813db\") " 
pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:57:58 crc kubenswrapper[4741]: I0226 08:57:58.483268 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:57:59 crc kubenswrapper[4741]: I0226 08:57:59.179649 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xsnss"] Feb 26 08:57:59 crc kubenswrapper[4741]: I0226 08:57:59.360784 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnss" event={"ID":"ee0d6379-443d-44e8-8542-e395afe813db","Type":"ContainerStarted","Data":"9b13970ea36975bd1a5ef68d4241a051f271a248fc947432e78832202d8fab5e"} Feb 26 08:58:00 crc kubenswrapper[4741]: I0226 08:58:00.185181 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534938-mkf4d"] Feb 26 08:58:00 crc kubenswrapper[4741]: I0226 08:58:00.186988 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534938-mkf4d" Feb 26 08:58:00 crc kubenswrapper[4741]: I0226 08:58:00.190026 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 08:58:00 crc kubenswrapper[4741]: I0226 08:58:00.190505 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 08:58:00 crc kubenswrapper[4741]: I0226 08:58:00.190913 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 08:58:00 crc kubenswrapper[4741]: I0226 08:58:00.241232 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534938-mkf4d"] Feb 26 08:58:00 crc kubenswrapper[4741]: I0226 08:58:00.365138 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4q5n\" (UniqueName: \"kubernetes.io/projected/990a6803-9d8a-4544-818f-0979ab2292bf-kube-api-access-b4q5n\") pod \"auto-csr-approver-29534938-mkf4d\" (UID: \"990a6803-9d8a-4544-818f-0979ab2292bf\") " pod="openshift-infra/auto-csr-approver-29534938-mkf4d" Feb 26 08:58:00 crc kubenswrapper[4741]: I0226 08:58:00.447094 4741 generic.go:334] "Generic (PLEG): container finished" podID="ee0d6379-443d-44e8-8542-e395afe813db" containerID="a792ee5f2473e0d0ffe5d809d64e83085612c739282ec6a502eb7b005a7a24d2" exitCode=0 Feb 26 08:58:00 crc kubenswrapper[4741]: I0226 08:58:00.447167 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnss" event={"ID":"ee0d6379-443d-44e8-8542-e395afe813db","Type":"ContainerDied","Data":"a792ee5f2473e0d0ffe5d809d64e83085612c739282ec6a502eb7b005a7a24d2"} Feb 26 08:58:00 crc kubenswrapper[4741]: I0226 08:58:00.467462 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4q5n\" (UniqueName: 
\"kubernetes.io/projected/990a6803-9d8a-4544-818f-0979ab2292bf-kube-api-access-b4q5n\") pod \"auto-csr-approver-29534938-mkf4d\" (UID: \"990a6803-9d8a-4544-818f-0979ab2292bf\") " pod="openshift-infra/auto-csr-approver-29534938-mkf4d" Feb 26 08:58:00 crc kubenswrapper[4741]: I0226 08:58:00.508576 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4q5n\" (UniqueName: \"kubernetes.io/projected/990a6803-9d8a-4544-818f-0979ab2292bf-kube-api-access-b4q5n\") pod \"auto-csr-approver-29534938-mkf4d\" (UID: \"990a6803-9d8a-4544-818f-0979ab2292bf\") " pod="openshift-infra/auto-csr-approver-29534938-mkf4d" Feb 26 08:58:00 crc kubenswrapper[4741]: I0226 08:58:00.806905 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534938-mkf4d" Feb 26 08:58:01 crc kubenswrapper[4741]: I0226 08:58:01.366383 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534938-mkf4d"] Feb 26 08:58:01 crc kubenswrapper[4741]: I0226 08:58:01.473231 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534938-mkf4d" event={"ID":"990a6803-9d8a-4544-818f-0979ab2292bf","Type":"ContainerStarted","Data":"09fc11201e32cdde063064ca1493ee2dda297460bbd0a4a9e037f1a26fec2cd8"} Feb 26 08:58:01 crc kubenswrapper[4741]: I0226 08:58:01.484813 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnss" event={"ID":"ee0d6379-443d-44e8-8542-e395afe813db","Type":"ContainerStarted","Data":"c7caf24db6eada413bee3b9f6413d74744e02c333e224b8b5b605f4a928db028"} Feb 26 08:58:03 crc kubenswrapper[4741]: I0226 08:58:03.519566 4741 generic.go:334] "Generic (PLEG): container finished" podID="990a6803-9d8a-4544-818f-0979ab2292bf" containerID="65d32d25ea966cfe09e86534d1f6244d0ba5f9d8906bde127ec96f8513bd8330" exitCode=0 Feb 26 08:58:03 crc kubenswrapper[4741]: I0226 08:58:03.519738 4741 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534938-mkf4d" event={"ID":"990a6803-9d8a-4544-818f-0979ab2292bf","Type":"ContainerDied","Data":"65d32d25ea966cfe09e86534d1f6244d0ba5f9d8906bde127ec96f8513bd8330"} Feb 26 08:58:03 crc kubenswrapper[4741]: I0226 08:58:03.525341 4741 generic.go:334] "Generic (PLEG): container finished" podID="ee0d6379-443d-44e8-8542-e395afe813db" containerID="c7caf24db6eada413bee3b9f6413d74744e02c333e224b8b5b605f4a928db028" exitCode=0 Feb 26 08:58:03 crc kubenswrapper[4741]: I0226 08:58:03.525402 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnss" event={"ID":"ee0d6379-443d-44e8-8542-e395afe813db","Type":"ContainerDied","Data":"c7caf24db6eada413bee3b9f6413d74744e02c333e224b8b5b605f4a928db028"} Feb 26 08:58:04 crc kubenswrapper[4741]: I0226 08:58:04.588679 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnss" event={"ID":"ee0d6379-443d-44e8-8542-e395afe813db","Type":"ContainerStarted","Data":"a3f8cd483c09cdf5f6a3faa05c0614f6290989d021f948b8a10b0b343c92715c"} Feb 26 08:58:04 crc kubenswrapper[4741]: I0226 08:58:04.657845 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xsnss" podStartSLOduration=2.904588433 podStartE2EDuration="6.657818647s" podCreationTimestamp="2026-02-26 08:57:58 +0000 UTC" firstStartedPulling="2026-02-26 08:58:00.450703765 +0000 UTC m=+2715.446641142" lastFinishedPulling="2026-02-26 08:58:04.203933969 +0000 UTC m=+2719.199871356" observedRunningTime="2026-02-26 08:58:04.635051759 +0000 UTC m=+2719.630989156" watchObservedRunningTime="2026-02-26 08:58:04.657818647 +0000 UTC m=+2719.653756034" Feb 26 08:58:05 crc kubenswrapper[4741]: I0226 08:58:05.172956 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534938-mkf4d" Feb 26 08:58:05 crc kubenswrapper[4741]: I0226 08:58:05.249587 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4q5n\" (UniqueName: \"kubernetes.io/projected/990a6803-9d8a-4544-818f-0979ab2292bf-kube-api-access-b4q5n\") pod \"990a6803-9d8a-4544-818f-0979ab2292bf\" (UID: \"990a6803-9d8a-4544-818f-0979ab2292bf\") " Feb 26 08:58:05 crc kubenswrapper[4741]: I0226 08:58:05.265706 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/990a6803-9d8a-4544-818f-0979ab2292bf-kube-api-access-b4q5n" (OuterVolumeSpecName: "kube-api-access-b4q5n") pod "990a6803-9d8a-4544-818f-0979ab2292bf" (UID: "990a6803-9d8a-4544-818f-0979ab2292bf"). InnerVolumeSpecName "kube-api-access-b4q5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:58:05 crc kubenswrapper[4741]: I0226 08:58:05.353451 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4q5n\" (UniqueName: \"kubernetes.io/projected/990a6803-9d8a-4544-818f-0979ab2292bf-kube-api-access-b4q5n\") on node \"crc\" DevicePath \"\"" Feb 26 08:58:05 crc kubenswrapper[4741]: I0226 08:58:05.602056 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534938-mkf4d" event={"ID":"990a6803-9d8a-4544-818f-0979ab2292bf","Type":"ContainerDied","Data":"09fc11201e32cdde063064ca1493ee2dda297460bbd0a4a9e037f1a26fec2cd8"} Feb 26 08:58:05 crc kubenswrapper[4741]: I0226 08:58:05.602156 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09fc11201e32cdde063064ca1493ee2dda297460bbd0a4a9e037f1a26fec2cd8" Feb 26 08:58:05 crc kubenswrapper[4741]: I0226 08:58:05.602159 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534938-mkf4d" Feb 26 08:58:06 crc kubenswrapper[4741]: I0226 08:58:06.273761 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534932-cvlg9"] Feb 26 08:58:06 crc kubenswrapper[4741]: I0226 08:58:06.287463 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534932-cvlg9"] Feb 26 08:58:07 crc kubenswrapper[4741]: I0226 08:58:07.803523 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8" path="/var/lib/kubelet/pods/acaa57c7-2dc7-49a7-8e6b-a661cfbfe6e8/volumes" Feb 26 08:58:08 crc kubenswrapper[4741]: I0226 08:58:08.484339 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:58:08 crc kubenswrapper[4741]: I0226 08:58:08.487547 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:58:09 crc kubenswrapper[4741]: I0226 08:58:09.551436 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xsnss" podUID="ee0d6379-443d-44e8-8542-e395afe813db" containerName="registry-server" probeResult="failure" output=< Feb 26 08:58:09 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:58:09 crc kubenswrapper[4741]: > Feb 26 08:58:18 crc kubenswrapper[4741]: I0226 08:58:18.566889 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:58:18 crc kubenswrapper[4741]: I0226 08:58:18.623804 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:58:18 crc kubenswrapper[4741]: I0226 08:58:18.829990 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-xsnss"] Feb 26 08:58:19 crc kubenswrapper[4741]: I0226 08:58:19.826273 4741 generic.go:334] "Generic (PLEG): container finished" podID="34921c87-3a4a-4be3-8a8e-8cae7baf4785" containerID="ba569d4540731c9d9de92abee23e24ead0d66775cf4affab3b9ba74657b78bcc" exitCode=0 Feb 26 08:58:19 crc kubenswrapper[4741]: I0226 08:58:19.826429 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" event={"ID":"34921c87-3a4a-4be3-8a8e-8cae7baf4785","Type":"ContainerDied","Data":"ba569d4540731c9d9de92abee23e24ead0d66775cf4affab3b9ba74657b78bcc"} Feb 26 08:58:19 crc kubenswrapper[4741]: I0226 08:58:19.826624 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xsnss" podUID="ee0d6379-443d-44e8-8542-e395afe813db" containerName="registry-server" containerID="cri-o://a3f8cd483c09cdf5f6a3faa05c0614f6290989d021f948b8a10b0b343c92715c" gracePeriod=2 Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.424366 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.570446 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0d6379-443d-44e8-8542-e395afe813db-utilities\") pod \"ee0d6379-443d-44e8-8542-e395afe813db\" (UID: \"ee0d6379-443d-44e8-8542-e395afe813db\") " Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.570620 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mz67\" (UniqueName: \"kubernetes.io/projected/ee0d6379-443d-44e8-8542-e395afe813db-kube-api-access-4mz67\") pod \"ee0d6379-443d-44e8-8542-e395afe813db\" (UID: \"ee0d6379-443d-44e8-8542-e395afe813db\") " Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.571047 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0d6379-443d-44e8-8542-e395afe813db-catalog-content\") pod \"ee0d6379-443d-44e8-8542-e395afe813db\" (UID: \"ee0d6379-443d-44e8-8542-e395afe813db\") " Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.571608 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee0d6379-443d-44e8-8542-e395afe813db-utilities" (OuterVolumeSpecName: "utilities") pod "ee0d6379-443d-44e8-8542-e395afe813db" (UID: "ee0d6379-443d-44e8-8542-e395afe813db"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.571828 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0d6379-443d-44e8-8542-e395afe813db-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.581011 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0d6379-443d-44e8-8542-e395afe813db-kube-api-access-4mz67" (OuterVolumeSpecName: "kube-api-access-4mz67") pod "ee0d6379-443d-44e8-8542-e395afe813db" (UID: "ee0d6379-443d-44e8-8542-e395afe813db"). InnerVolumeSpecName "kube-api-access-4mz67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.628424 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee0d6379-443d-44e8-8542-e395afe813db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee0d6379-443d-44e8-8542-e395afe813db" (UID: "ee0d6379-443d-44e8-8542-e395afe813db"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.703813 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0d6379-443d-44e8-8542-e395afe813db-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.703870 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mz67\" (UniqueName: \"kubernetes.io/projected/ee0d6379-443d-44e8-8542-e395afe813db-kube-api-access-4mz67\") on node \"crc\" DevicePath \"\"" Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.852693 4741 generic.go:334] "Generic (PLEG): container finished" podID="ee0d6379-443d-44e8-8542-e395afe813db" containerID="a3f8cd483c09cdf5f6a3faa05c0614f6290989d021f948b8a10b0b343c92715c" exitCode=0 Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.852976 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xsnss" Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.859166 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnss" event={"ID":"ee0d6379-443d-44e8-8542-e395afe813db","Type":"ContainerDied","Data":"a3f8cd483c09cdf5f6a3faa05c0614f6290989d021f948b8a10b0b343c92715c"} Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.859209 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnss" event={"ID":"ee0d6379-443d-44e8-8542-e395afe813db","Type":"ContainerDied","Data":"9b13970ea36975bd1a5ef68d4241a051f271a248fc947432e78832202d8fab5e"} Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.859241 4741 scope.go:117] "RemoveContainer" containerID="a3f8cd483c09cdf5f6a3faa05c0614f6290989d021f948b8a10b0b343c92715c" Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.905184 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-xsnss"] Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.946750 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xsnss"] Feb 26 08:58:20 crc kubenswrapper[4741]: I0226 08:58:20.981874 4741 scope.go:117] "RemoveContainer" containerID="c7caf24db6eada413bee3b9f6413d74744e02c333e224b8b5b605f4a928db028" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.141690 4741 scope.go:117] "RemoveContainer" containerID="a792ee5f2473e0d0ffe5d809d64e83085612c739282ec6a502eb7b005a7a24d2" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.178529 4741 scope.go:117] "RemoveContainer" containerID="a3f8cd483c09cdf5f6a3faa05c0614f6290989d021f948b8a10b0b343c92715c" Feb 26 08:58:21 crc kubenswrapper[4741]: E0226 08:58:21.179869 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3f8cd483c09cdf5f6a3faa05c0614f6290989d021f948b8a10b0b343c92715c\": container with ID starting with a3f8cd483c09cdf5f6a3faa05c0614f6290989d021f948b8a10b0b343c92715c not found: ID does not exist" containerID="a3f8cd483c09cdf5f6a3faa05c0614f6290989d021f948b8a10b0b343c92715c" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.179936 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f8cd483c09cdf5f6a3faa05c0614f6290989d021f948b8a10b0b343c92715c"} err="failed to get container status \"a3f8cd483c09cdf5f6a3faa05c0614f6290989d021f948b8a10b0b343c92715c\": rpc error: code = NotFound desc = could not find container \"a3f8cd483c09cdf5f6a3faa05c0614f6290989d021f948b8a10b0b343c92715c\": container with ID starting with a3f8cd483c09cdf5f6a3faa05c0614f6290989d021f948b8a10b0b343c92715c not found: ID does not exist" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.179976 4741 scope.go:117] "RemoveContainer" 
containerID="c7caf24db6eada413bee3b9f6413d74744e02c333e224b8b5b605f4a928db028" Feb 26 08:58:21 crc kubenswrapper[4741]: E0226 08:58:21.180506 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7caf24db6eada413bee3b9f6413d74744e02c333e224b8b5b605f4a928db028\": container with ID starting with c7caf24db6eada413bee3b9f6413d74744e02c333e224b8b5b605f4a928db028 not found: ID does not exist" containerID="c7caf24db6eada413bee3b9f6413d74744e02c333e224b8b5b605f4a928db028" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.180557 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7caf24db6eada413bee3b9f6413d74744e02c333e224b8b5b605f4a928db028"} err="failed to get container status \"c7caf24db6eada413bee3b9f6413d74744e02c333e224b8b5b605f4a928db028\": rpc error: code = NotFound desc = could not find container \"c7caf24db6eada413bee3b9f6413d74744e02c333e224b8b5b605f4a928db028\": container with ID starting with c7caf24db6eada413bee3b9f6413d74744e02c333e224b8b5b605f4a928db028 not found: ID does not exist" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.180581 4741 scope.go:117] "RemoveContainer" containerID="a792ee5f2473e0d0ffe5d809d64e83085612c739282ec6a502eb7b005a7a24d2" Feb 26 08:58:21 crc kubenswrapper[4741]: E0226 08:58:21.181744 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a792ee5f2473e0d0ffe5d809d64e83085612c739282ec6a502eb7b005a7a24d2\": container with ID starting with a792ee5f2473e0d0ffe5d809d64e83085612c739282ec6a502eb7b005a7a24d2 not found: ID does not exist" containerID="a792ee5f2473e0d0ffe5d809d64e83085612c739282ec6a502eb7b005a7a24d2" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.181770 4741 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a792ee5f2473e0d0ffe5d809d64e83085612c739282ec6a502eb7b005a7a24d2"} err="failed to get container status \"a792ee5f2473e0d0ffe5d809d64e83085612c739282ec6a502eb7b005a7a24d2\": rpc error: code = NotFound desc = could not find container \"a792ee5f2473e0d0ffe5d809d64e83085612c739282ec6a502eb7b005a7a24d2\": container with ID starting with a792ee5f2473e0d0ffe5d809d64e83085612c739282ec6a502eb7b005a7a24d2 not found: ID does not exist" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.648882 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.806771 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0d6379-443d-44e8-8542-e395afe813db" path="/var/lib/kubelet/pods/ee0d6379-443d-44e8-8542-e395afe813db/volumes" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.845836 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-ssh-key-openstack-edpm-ipam\") pod \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.846013 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djsjs\" (UniqueName: \"kubernetes.io/projected/34921c87-3a4a-4be3-8a8e-8cae7baf4785-kube-api-access-djsjs\") pod \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.847015 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-neutron-metadata-combined-ca-bundle\") pod 
\"34921c87-3a4a-4be3-8a8e-8cae7baf4785\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.847260 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-neutron-ovn-metadata-agent-neutron-config-0\") pod \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.847475 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-inventory\") pod \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.847754 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-nova-metadata-neutron-config-0\") pod \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\" (UID: \"34921c87-3a4a-4be3-8a8e-8cae7baf4785\") " Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.854187 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34921c87-3a4a-4be3-8a8e-8cae7baf4785-kube-api-access-djsjs" (OuterVolumeSpecName: "kube-api-access-djsjs") pod "34921c87-3a4a-4be3-8a8e-8cae7baf4785" (UID: "34921c87-3a4a-4be3-8a8e-8cae7baf4785"). InnerVolumeSpecName "kube-api-access-djsjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.860461 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "34921c87-3a4a-4be3-8a8e-8cae7baf4785" (UID: "34921c87-3a4a-4be3-8a8e-8cae7baf4785"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.869231 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" event={"ID":"34921c87-3a4a-4be3-8a8e-8cae7baf4785","Type":"ContainerDied","Data":"f59704c5546ed2c0eb983e0291a668097281b8f19bd39b0270c54d27eb6be532"} Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.869304 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f59704c5546ed2c0eb983e0291a668097281b8f19bd39b0270c54d27eb6be532" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.870138 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.893437 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-inventory" (OuterVolumeSpecName: "inventory") pod "34921c87-3a4a-4be3-8a8e-8cae7baf4785" (UID: "34921c87-3a4a-4be3-8a8e-8cae7baf4785"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.895609 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "34921c87-3a4a-4be3-8a8e-8cae7baf4785" (UID: "34921c87-3a4a-4be3-8a8e-8cae7baf4785"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.914956 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "34921c87-3a4a-4be3-8a8e-8cae7baf4785" (UID: "34921c87-3a4a-4be3-8a8e-8cae7baf4785"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.929369 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "34921c87-3a4a-4be3-8a8e-8cae7baf4785" (UID: "34921c87-3a4a-4be3-8a8e-8cae7baf4785"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.952736 4741 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.952792 4741 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.952809 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.952825 4741 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.952841 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34921c87-3a4a-4be3-8a8e-8cae7baf4785-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 08:58:21 crc kubenswrapper[4741]: I0226 08:58:21.952855 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djsjs\" (UniqueName: \"kubernetes.io/projected/34921c87-3a4a-4be3-8a8e-8cae7baf4785-kube-api-access-djsjs\") on node \"crc\" DevicePath \"\"" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.068523 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9"] Feb 26 
08:58:22 crc kubenswrapper[4741]: E0226 08:58:22.069668 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0d6379-443d-44e8-8542-e395afe813db" containerName="extract-content" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.069692 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0d6379-443d-44e8-8542-e395afe813db" containerName="extract-content" Feb 26 08:58:22 crc kubenswrapper[4741]: E0226 08:58:22.069739 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="990a6803-9d8a-4544-818f-0979ab2292bf" containerName="oc" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.069747 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="990a6803-9d8a-4544-818f-0979ab2292bf" containerName="oc" Feb 26 08:58:22 crc kubenswrapper[4741]: E0226 08:58:22.069784 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0d6379-443d-44e8-8542-e395afe813db" containerName="extract-utilities" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.069791 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0d6379-443d-44e8-8542-e395afe813db" containerName="extract-utilities" Feb 26 08:58:22 crc kubenswrapper[4741]: E0226 08:58:22.069800 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0d6379-443d-44e8-8542-e395afe813db" containerName="registry-server" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.069806 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0d6379-443d-44e8-8542-e395afe813db" containerName="registry-server" Feb 26 08:58:22 crc kubenswrapper[4741]: E0226 08:58:22.069821 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34921c87-3a4a-4be3-8a8e-8cae7baf4785" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.069830 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="34921c87-3a4a-4be3-8a8e-8cae7baf4785" 
containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.070192 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="990a6803-9d8a-4544-818f-0979ab2292bf" containerName="oc" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.070230 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="34921c87-3a4a-4be3-8a8e-8cae7baf4785" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.070274 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0d6379-443d-44e8-8542-e395afe813db" containerName="registry-server" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.071610 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.076489 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.083489 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9"] Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.260985 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.261039 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-inventory\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.261148 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.261243 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.261567 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5f74\" (UniqueName: \"kubernetes.io/projected/93812dcb-b40b-467e-8831-83b017ebd77b-kube-api-access-b5f74\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.364297 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.364376 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.364461 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.364581 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.364653 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5f74\" (UniqueName: \"kubernetes.io/projected/93812dcb-b40b-467e-8831-83b017ebd77b-kube-api-access-b5f74\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.369662 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.370688 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.371654 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.374006 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.383880 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5f74\" (UniqueName: \"kubernetes.io/projected/93812dcb-b40b-467e-8831-83b017ebd77b-kube-api-access-b5f74\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:22 crc kubenswrapper[4741]: I0226 08:58:22.457245 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 08:58:23 crc kubenswrapper[4741]: I0226 08:58:23.027298 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9"] Feb 26 08:58:23 crc kubenswrapper[4741]: I0226 08:58:23.038969 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 08:58:23 crc kubenswrapper[4741]: I0226 08:58:23.899526 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" event={"ID":"93812dcb-b40b-467e-8831-83b017ebd77b","Type":"ContainerStarted","Data":"0b7e511bf267186d6c5eb9f8c51dbe546c9d555de28a93615f6d95975d5afebd"} Feb 26 08:58:23 crc kubenswrapper[4741]: I0226 08:58:23.900146 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" event={"ID":"93812dcb-b40b-467e-8831-83b017ebd77b","Type":"ContainerStarted","Data":"d1e7094f872ec8693b52d821add7dd63824ca2d361836d149c5ffa0f07124e19"} Feb 26 08:58:23 crc kubenswrapper[4741]: I0226 08:58:23.926979 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" podStartSLOduration=1.4509290479999999 podStartE2EDuration="1.926958797s" podCreationTimestamp="2026-02-26 08:58:22 +0000 UTC" firstStartedPulling="2026-02-26 08:58:23.038715277 +0000 UTC m=+2738.034652664" lastFinishedPulling="2026-02-26 08:58:23.514745006 +0000 UTC m=+2738.510682413" observedRunningTime="2026-02-26 08:58:23.921045549 +0000 UTC m=+2738.916982946" watchObservedRunningTime="2026-02-26 08:58:23.926958797 +0000 UTC m=+2738.922896184" Feb 26 08:59:01 crc kubenswrapper[4741]: I0226 
08:59:01.746755 4741 scope.go:117] "RemoveContainer" containerID="752732535d2ef0e171700bf129f4addff73f1eaa7ea5acb8a22a8bc7de6ad8a1" Feb 26 08:59:34 crc kubenswrapper[4741]: I0226 08:59:34.994230 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pbf8m"] Feb 26 08:59:34 crc kubenswrapper[4741]: I0226 08:59:34.999639 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 08:59:35 crc kubenswrapper[4741]: I0226 08:59:35.057971 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pbf8m"] Feb 26 08:59:35 crc kubenswrapper[4741]: I0226 08:59:35.144055 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db228aa4-fed8-4b8b-ad5b-56cec6617487-utilities\") pod \"redhat-operators-pbf8m\" (UID: \"db228aa4-fed8-4b8b-ad5b-56cec6617487\") " pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 08:59:35 crc kubenswrapper[4741]: I0226 08:59:35.144167 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db228aa4-fed8-4b8b-ad5b-56cec6617487-catalog-content\") pod \"redhat-operators-pbf8m\" (UID: \"db228aa4-fed8-4b8b-ad5b-56cec6617487\") " pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 08:59:35 crc kubenswrapper[4741]: I0226 08:59:35.144856 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7drbx\" (UniqueName: \"kubernetes.io/projected/db228aa4-fed8-4b8b-ad5b-56cec6617487-kube-api-access-7drbx\") pod \"redhat-operators-pbf8m\" (UID: \"db228aa4-fed8-4b8b-ad5b-56cec6617487\") " pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 08:59:35 crc kubenswrapper[4741]: I0226 08:59:35.248074 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7drbx\" (UniqueName: \"kubernetes.io/projected/db228aa4-fed8-4b8b-ad5b-56cec6617487-kube-api-access-7drbx\") pod \"redhat-operators-pbf8m\" (UID: \"db228aa4-fed8-4b8b-ad5b-56cec6617487\") " pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 08:59:35 crc kubenswrapper[4741]: I0226 08:59:35.248235 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db228aa4-fed8-4b8b-ad5b-56cec6617487-utilities\") pod \"redhat-operators-pbf8m\" (UID: \"db228aa4-fed8-4b8b-ad5b-56cec6617487\") " pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 08:59:35 crc kubenswrapper[4741]: I0226 08:59:35.248278 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db228aa4-fed8-4b8b-ad5b-56cec6617487-catalog-content\") pod \"redhat-operators-pbf8m\" (UID: \"db228aa4-fed8-4b8b-ad5b-56cec6617487\") " pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 08:59:35 crc kubenswrapper[4741]: I0226 08:59:35.249024 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db228aa4-fed8-4b8b-ad5b-56cec6617487-catalog-content\") pod \"redhat-operators-pbf8m\" (UID: \"db228aa4-fed8-4b8b-ad5b-56cec6617487\") " pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 08:59:35 crc kubenswrapper[4741]: I0226 08:59:35.249055 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db228aa4-fed8-4b8b-ad5b-56cec6617487-utilities\") pod \"redhat-operators-pbf8m\" (UID: \"db228aa4-fed8-4b8b-ad5b-56cec6617487\") " pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 08:59:35 crc kubenswrapper[4741]: I0226 08:59:35.277217 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7drbx\" 
(UniqueName: \"kubernetes.io/projected/db228aa4-fed8-4b8b-ad5b-56cec6617487-kube-api-access-7drbx\") pod \"redhat-operators-pbf8m\" (UID: \"db228aa4-fed8-4b8b-ad5b-56cec6617487\") " pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 08:59:35 crc kubenswrapper[4741]: I0226 08:59:35.335792 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 08:59:35 crc kubenswrapper[4741]: I0226 08:59:35.922689 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pbf8m"] Feb 26 08:59:36 crc kubenswrapper[4741]: I0226 08:59:36.904304 4741 generic.go:334] "Generic (PLEG): container finished" podID="db228aa4-fed8-4b8b-ad5b-56cec6617487" containerID="f1b75ccf0356b4db71fe0857d29f6aec7ecaa380353b1c9eec52defeb444c9e0" exitCode=0 Feb 26 08:59:36 crc kubenswrapper[4741]: I0226 08:59:36.904431 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbf8m" event={"ID":"db228aa4-fed8-4b8b-ad5b-56cec6617487","Type":"ContainerDied","Data":"f1b75ccf0356b4db71fe0857d29f6aec7ecaa380353b1c9eec52defeb444c9e0"} Feb 26 08:59:36 crc kubenswrapper[4741]: I0226 08:59:36.904995 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbf8m" event={"ID":"db228aa4-fed8-4b8b-ad5b-56cec6617487","Type":"ContainerStarted","Data":"f74154aa145599cb3595a83ec5f82ab85e612676e740d5f2d82eba67baaddeb2"} Feb 26 08:59:37 crc kubenswrapper[4741]: I0226 08:59:37.918694 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbf8m" event={"ID":"db228aa4-fed8-4b8b-ad5b-56cec6617487","Type":"ContainerStarted","Data":"fa92bd55cc5a5b97216939c28cada4e92d81f2f237a5d2304d2ad863154fa89f"} Feb 26 08:59:46 crc kubenswrapper[4741]: I0226 08:59:46.038363 4741 generic.go:334] "Generic (PLEG): container finished" podID="db228aa4-fed8-4b8b-ad5b-56cec6617487" 
containerID="fa92bd55cc5a5b97216939c28cada4e92d81f2f237a5d2304d2ad863154fa89f" exitCode=0 Feb 26 08:59:46 crc kubenswrapper[4741]: I0226 08:59:46.039468 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbf8m" event={"ID":"db228aa4-fed8-4b8b-ad5b-56cec6617487","Type":"ContainerDied","Data":"fa92bd55cc5a5b97216939c28cada4e92d81f2f237a5d2304d2ad863154fa89f"} Feb 26 08:59:48 crc kubenswrapper[4741]: I0226 08:59:48.076170 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbf8m" event={"ID":"db228aa4-fed8-4b8b-ad5b-56cec6617487","Type":"ContainerStarted","Data":"ad8a10377b86f3a98d3805a98db10d6b62fc222eff565434fa87c926e133b2cc"} Feb 26 08:59:48 crc kubenswrapper[4741]: I0226 08:59:48.123493 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pbf8m" podStartSLOduration=4.274974616 podStartE2EDuration="14.12346999s" podCreationTimestamp="2026-02-26 08:59:34 +0000 UTC" firstStartedPulling="2026-02-26 08:59:36.90698568 +0000 UTC m=+2811.902923067" lastFinishedPulling="2026-02-26 08:59:46.755481054 +0000 UTC m=+2821.751418441" observedRunningTime="2026-02-26 08:59:48.108048391 +0000 UTC m=+2823.103985778" watchObservedRunningTime="2026-02-26 08:59:48.12346999 +0000 UTC m=+2823.119407377" Feb 26 08:59:55 crc kubenswrapper[4741]: I0226 08:59:55.149600 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 08:59:55 crc kubenswrapper[4741]: I0226 08:59:55.150171 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 08:59:55 crc kubenswrapper[4741]: I0226 08:59:55.336795 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 08:59:55 crc kubenswrapper[4741]: I0226 08:59:55.336865 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 08:59:56 crc kubenswrapper[4741]: I0226 08:59:56.392664 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pbf8m" podUID="db228aa4-fed8-4b8b-ad5b-56cec6617487" containerName="registry-server" probeResult="failure" output=< Feb 26 08:59:56 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 08:59:56 crc kubenswrapper[4741]: > Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.187332 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534940-t9xj2"] Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.190274 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534940-t9xj2" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.193927 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.194132 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.199607 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.203655 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94"] Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.231823 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.235305 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.235604 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.236667 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534940-t9xj2"] Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.240242 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94"] Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.299048 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-772hj\" (UniqueName: 
\"kubernetes.io/projected/2ea04f7e-f160-4716-8a92-7ef5dc43a822-kube-api-access-772hj\") pod \"auto-csr-approver-29534940-t9xj2\" (UID: \"2ea04f7e-f160-4716-8a92-7ef5dc43a822\") " pod="openshift-infra/auto-csr-approver-29534940-t9xj2" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.403245 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk6mb\" (UniqueName: \"kubernetes.io/projected/f162d4a6-995e-4b7a-b735-bab007914a24-kube-api-access-jk6mb\") pod \"collect-profiles-29534940-5jt94\" (UID: \"f162d4a6-995e-4b7a-b735-bab007914a24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.404315 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-772hj\" (UniqueName: \"kubernetes.io/projected/2ea04f7e-f160-4716-8a92-7ef5dc43a822-kube-api-access-772hj\") pod \"auto-csr-approver-29534940-t9xj2\" (UID: \"2ea04f7e-f160-4716-8a92-7ef5dc43a822\") " pod="openshift-infra/auto-csr-approver-29534940-t9xj2" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.404793 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f162d4a6-995e-4b7a-b735-bab007914a24-config-volume\") pod \"collect-profiles-29534940-5jt94\" (UID: \"f162d4a6-995e-4b7a-b735-bab007914a24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.404860 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f162d4a6-995e-4b7a-b735-bab007914a24-secret-volume\") pod \"collect-profiles-29534940-5jt94\" (UID: \"f162d4a6-995e-4b7a-b735-bab007914a24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" Feb 26 09:00:00 
crc kubenswrapper[4741]: I0226 09:00:00.429996 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-772hj\" (UniqueName: \"kubernetes.io/projected/2ea04f7e-f160-4716-8a92-7ef5dc43a822-kube-api-access-772hj\") pod \"auto-csr-approver-29534940-t9xj2\" (UID: \"2ea04f7e-f160-4716-8a92-7ef5dc43a822\") " pod="openshift-infra/auto-csr-approver-29534940-t9xj2" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.508883 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f162d4a6-995e-4b7a-b735-bab007914a24-config-volume\") pod \"collect-profiles-29534940-5jt94\" (UID: \"f162d4a6-995e-4b7a-b735-bab007914a24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.508953 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f162d4a6-995e-4b7a-b735-bab007914a24-secret-volume\") pod \"collect-profiles-29534940-5jt94\" (UID: \"f162d4a6-995e-4b7a-b735-bab007914a24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.509205 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk6mb\" (UniqueName: \"kubernetes.io/projected/f162d4a6-995e-4b7a-b735-bab007914a24-kube-api-access-jk6mb\") pod \"collect-profiles-29534940-5jt94\" (UID: \"f162d4a6-995e-4b7a-b735-bab007914a24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.510701 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f162d4a6-995e-4b7a-b735-bab007914a24-config-volume\") pod \"collect-profiles-29534940-5jt94\" (UID: 
\"f162d4a6-995e-4b7a-b735-bab007914a24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.514971 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f162d4a6-995e-4b7a-b735-bab007914a24-secret-volume\") pod \"collect-profiles-29534940-5jt94\" (UID: \"f162d4a6-995e-4b7a-b735-bab007914a24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.528824 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk6mb\" (UniqueName: \"kubernetes.io/projected/f162d4a6-995e-4b7a-b735-bab007914a24-kube-api-access-jk6mb\") pod \"collect-profiles-29534940-5jt94\" (UID: \"f162d4a6-995e-4b7a-b735-bab007914a24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.536312 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534940-t9xj2" Feb 26 09:00:00 crc kubenswrapper[4741]: I0226 09:00:00.563933 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" Feb 26 09:00:01 crc kubenswrapper[4741]: I0226 09:00:01.104157 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534940-t9xj2"] Feb 26 09:00:01 crc kubenswrapper[4741]: I0226 09:00:01.231454 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94"] Feb 26 09:00:01 crc kubenswrapper[4741]: I0226 09:00:01.255203 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" event={"ID":"f162d4a6-995e-4b7a-b735-bab007914a24","Type":"ContainerStarted","Data":"0e837afc179f023573d3a66ada9993fa206cb9d80a6aaee48fdecee962576353"} Feb 26 09:00:01 crc kubenswrapper[4741]: I0226 09:00:01.257789 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534940-t9xj2" event={"ID":"2ea04f7e-f160-4716-8a92-7ef5dc43a822","Type":"ContainerStarted","Data":"0ec33983e7e7769fc3ed4fecc293c861d6eebf86bf23e57f92a023f2619c3576"} Feb 26 09:00:02 crc kubenswrapper[4741]: I0226 09:00:02.277641 4741 generic.go:334] "Generic (PLEG): container finished" podID="f162d4a6-995e-4b7a-b735-bab007914a24" containerID="0366502ebcc456aae1376be25fd9f0433d30f7d434ea9be58fbd91e485ba8922" exitCode=0 Feb 26 09:00:02 crc kubenswrapper[4741]: I0226 09:00:02.277756 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" event={"ID":"f162d4a6-995e-4b7a-b735-bab007914a24","Type":"ContainerDied","Data":"0366502ebcc456aae1376be25fd9f0433d30f7d434ea9be58fbd91e485ba8922"} Feb 26 09:00:03 crc kubenswrapper[4741]: I0226 09:00:03.849674 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" Feb 26 09:00:03 crc kubenswrapper[4741]: I0226 09:00:03.941163 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f162d4a6-995e-4b7a-b735-bab007914a24-config-volume\") pod \"f162d4a6-995e-4b7a-b735-bab007914a24\" (UID: \"f162d4a6-995e-4b7a-b735-bab007914a24\") " Feb 26 09:00:03 crc kubenswrapper[4741]: I0226 09:00:03.941272 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk6mb\" (UniqueName: \"kubernetes.io/projected/f162d4a6-995e-4b7a-b735-bab007914a24-kube-api-access-jk6mb\") pod \"f162d4a6-995e-4b7a-b735-bab007914a24\" (UID: \"f162d4a6-995e-4b7a-b735-bab007914a24\") " Feb 26 09:00:03 crc kubenswrapper[4741]: I0226 09:00:03.941455 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f162d4a6-995e-4b7a-b735-bab007914a24-secret-volume\") pod \"f162d4a6-995e-4b7a-b735-bab007914a24\" (UID: \"f162d4a6-995e-4b7a-b735-bab007914a24\") " Feb 26 09:00:03 crc kubenswrapper[4741]: I0226 09:00:03.942256 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f162d4a6-995e-4b7a-b735-bab007914a24-config-volume" (OuterVolumeSpecName: "config-volume") pod "f162d4a6-995e-4b7a-b735-bab007914a24" (UID: "f162d4a6-995e-4b7a-b735-bab007914a24"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:00:03 crc kubenswrapper[4741]: I0226 09:00:03.951051 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f162d4a6-995e-4b7a-b735-bab007914a24-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f162d4a6-995e-4b7a-b735-bab007914a24" (UID: "f162d4a6-995e-4b7a-b735-bab007914a24"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:00:03 crc kubenswrapper[4741]: I0226 09:00:03.958103 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f162d4a6-995e-4b7a-b735-bab007914a24-kube-api-access-jk6mb" (OuterVolumeSpecName: "kube-api-access-jk6mb") pod "f162d4a6-995e-4b7a-b735-bab007914a24" (UID: "f162d4a6-995e-4b7a-b735-bab007914a24"). InnerVolumeSpecName "kube-api-access-jk6mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:00:04 crc kubenswrapper[4741]: I0226 09:00:04.045929 4741 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f162d4a6-995e-4b7a-b735-bab007914a24-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 09:00:04 crc kubenswrapper[4741]: I0226 09:00:04.045984 4741 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f162d4a6-995e-4b7a-b735-bab007914a24-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 09:00:04 crc kubenswrapper[4741]: I0226 09:00:04.046001 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk6mb\" (UniqueName: \"kubernetes.io/projected/f162d4a6-995e-4b7a-b735-bab007914a24-kube-api-access-jk6mb\") on node \"crc\" DevicePath \"\"" Feb 26 09:00:04 crc kubenswrapper[4741]: I0226 09:00:04.306873 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" event={"ID":"f162d4a6-995e-4b7a-b735-bab007914a24","Type":"ContainerDied","Data":"0e837afc179f023573d3a66ada9993fa206cb9d80a6aaee48fdecee962576353"} Feb 26 09:00:04 crc kubenswrapper[4741]: I0226 09:00:04.307525 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e837afc179f023573d3a66ada9993fa206cb9d80a6aaee48fdecee962576353" Feb 26 09:00:04 crc kubenswrapper[4741]: I0226 09:00:04.306973 4741 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94" Feb 26 09:00:04 crc kubenswrapper[4741]: I0226 09:00:04.959951 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh"] Feb 26 09:00:04 crc kubenswrapper[4741]: I0226 09:00:04.975648 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534895-b2gkh"] Feb 26 09:00:05 crc kubenswrapper[4741]: I0226 09:00:05.843838 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4fc717c-df6a-4ba5-a998-6385257e6f7e" path="/var/lib/kubelet/pods/b4fc717c-df6a-4ba5-a998-6385257e6f7e/volumes" Feb 26 09:00:06 crc kubenswrapper[4741]: I0226 09:00:06.398585 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pbf8m" podUID="db228aa4-fed8-4b8b-ad5b-56cec6617487" containerName="registry-server" probeResult="failure" output=< Feb 26 09:00:06 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:00:06 crc kubenswrapper[4741]: > Feb 26 09:00:12 crc kubenswrapper[4741]: I0226 09:00:12.429256 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534940-t9xj2" event={"ID":"2ea04f7e-f160-4716-8a92-7ef5dc43a822","Type":"ContainerStarted","Data":"3ccc70103c9e4972bfaecb958c7f8e1ff9c18388652e9cd6836e78753fe12064"} Feb 26 09:00:12 crc kubenswrapper[4741]: I0226 09:00:12.461830 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534940-t9xj2" podStartSLOduration=1.802197957 podStartE2EDuration="12.461800077s" podCreationTimestamp="2026-02-26 09:00:00 +0000 UTC" firstStartedPulling="2026-02-26 09:00:01.110217822 +0000 UTC m=+2836.106155199" lastFinishedPulling="2026-02-26 09:00:11.769819932 +0000 UTC m=+2846.765757319" observedRunningTime="2026-02-26 
09:00:12.455383044 +0000 UTC m=+2847.451320471" watchObservedRunningTime="2026-02-26 09:00:12.461800077 +0000 UTC m=+2847.457737494" Feb 26 09:00:13 crc kubenswrapper[4741]: I0226 09:00:13.450918 4741 generic.go:334] "Generic (PLEG): container finished" podID="2ea04f7e-f160-4716-8a92-7ef5dc43a822" containerID="3ccc70103c9e4972bfaecb958c7f8e1ff9c18388652e9cd6836e78753fe12064" exitCode=0 Feb 26 09:00:13 crc kubenswrapper[4741]: I0226 09:00:13.451075 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534940-t9xj2" event={"ID":"2ea04f7e-f160-4716-8a92-7ef5dc43a822","Type":"ContainerDied","Data":"3ccc70103c9e4972bfaecb958c7f8e1ff9c18388652e9cd6836e78753fe12064"} Feb 26 09:00:15 crc kubenswrapper[4741]: I0226 09:00:15.017742 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534940-t9xj2" Feb 26 09:00:15 crc kubenswrapper[4741]: I0226 09:00:15.129228 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-772hj\" (UniqueName: \"kubernetes.io/projected/2ea04f7e-f160-4716-8a92-7ef5dc43a822-kube-api-access-772hj\") pod \"2ea04f7e-f160-4716-8a92-7ef5dc43a822\" (UID: \"2ea04f7e-f160-4716-8a92-7ef5dc43a822\") " Feb 26 09:00:15 crc kubenswrapper[4741]: I0226 09:00:15.139310 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea04f7e-f160-4716-8a92-7ef5dc43a822-kube-api-access-772hj" (OuterVolumeSpecName: "kube-api-access-772hj") pod "2ea04f7e-f160-4716-8a92-7ef5dc43a822" (UID: "2ea04f7e-f160-4716-8a92-7ef5dc43a822"). InnerVolumeSpecName "kube-api-access-772hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:00:15 crc kubenswrapper[4741]: I0226 09:00:15.234368 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-772hj\" (UniqueName: \"kubernetes.io/projected/2ea04f7e-f160-4716-8a92-7ef5dc43a822-kube-api-access-772hj\") on node \"crc\" DevicePath \"\"" Feb 26 09:00:15 crc kubenswrapper[4741]: I0226 09:00:15.435583 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 09:00:15 crc kubenswrapper[4741]: I0226 09:00:15.500214 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534940-t9xj2" event={"ID":"2ea04f7e-f160-4716-8a92-7ef5dc43a822","Type":"ContainerDied","Data":"0ec33983e7e7769fc3ed4fecc293c861d6eebf86bf23e57f92a023f2619c3576"} Feb 26 09:00:15 crc kubenswrapper[4741]: I0226 09:00:15.500264 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ec33983e7e7769fc3ed4fecc293c861d6eebf86bf23e57f92a023f2619c3576" Feb 26 09:00:15 crc kubenswrapper[4741]: I0226 09:00:15.500397 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534940-t9xj2" Feb 26 09:00:15 crc kubenswrapper[4741]: I0226 09:00:15.518346 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 09:00:15 crc kubenswrapper[4741]: I0226 09:00:15.541457 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534934-nscjz"] Feb 26 09:00:15 crc kubenswrapper[4741]: I0226 09:00:15.586310 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534934-nscjz"] Feb 26 09:00:15 crc kubenswrapper[4741]: I0226 09:00:15.686881 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pbf8m"] Feb 26 09:00:15 crc kubenswrapper[4741]: I0226 09:00:15.825295 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507b75c8-121a-4d89-95a2-f2a480783291" path="/var/lib/kubelet/pods/507b75c8-121a-4d89-95a2-f2a480783291/volumes" Feb 26 09:00:16 crc kubenswrapper[4741]: I0226 09:00:16.520698 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pbf8m" podUID="db228aa4-fed8-4b8b-ad5b-56cec6617487" containerName="registry-server" containerID="cri-o://ad8a10377b86f3a98d3805a98db10d6b62fc222eff565434fa87c926e133b2cc" gracePeriod=2 Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.201453 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.307164 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db228aa4-fed8-4b8b-ad5b-56cec6617487-catalog-content\") pod \"db228aa4-fed8-4b8b-ad5b-56cec6617487\" (UID: \"db228aa4-fed8-4b8b-ad5b-56cec6617487\") " Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.307621 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7drbx\" (UniqueName: \"kubernetes.io/projected/db228aa4-fed8-4b8b-ad5b-56cec6617487-kube-api-access-7drbx\") pod \"db228aa4-fed8-4b8b-ad5b-56cec6617487\" (UID: \"db228aa4-fed8-4b8b-ad5b-56cec6617487\") " Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.307908 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db228aa4-fed8-4b8b-ad5b-56cec6617487-utilities\") pod \"db228aa4-fed8-4b8b-ad5b-56cec6617487\" (UID: \"db228aa4-fed8-4b8b-ad5b-56cec6617487\") " Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.308768 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db228aa4-fed8-4b8b-ad5b-56cec6617487-utilities" (OuterVolumeSpecName: "utilities") pod "db228aa4-fed8-4b8b-ad5b-56cec6617487" (UID: "db228aa4-fed8-4b8b-ad5b-56cec6617487"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.323425 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db228aa4-fed8-4b8b-ad5b-56cec6617487-kube-api-access-7drbx" (OuterVolumeSpecName: "kube-api-access-7drbx") pod "db228aa4-fed8-4b8b-ad5b-56cec6617487" (UID: "db228aa4-fed8-4b8b-ad5b-56cec6617487"). InnerVolumeSpecName "kube-api-access-7drbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.412000 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7drbx\" (UniqueName: \"kubernetes.io/projected/db228aa4-fed8-4b8b-ad5b-56cec6617487-kube-api-access-7drbx\") on node \"crc\" DevicePath \"\"" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.412058 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db228aa4-fed8-4b8b-ad5b-56cec6617487-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.443569 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db228aa4-fed8-4b8b-ad5b-56cec6617487-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db228aa4-fed8-4b8b-ad5b-56cec6617487" (UID: "db228aa4-fed8-4b8b-ad5b-56cec6617487"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.515525 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db228aa4-fed8-4b8b-ad5b-56cec6617487-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.539345 4741 generic.go:334] "Generic (PLEG): container finished" podID="db228aa4-fed8-4b8b-ad5b-56cec6617487" containerID="ad8a10377b86f3a98d3805a98db10d6b62fc222eff565434fa87c926e133b2cc" exitCode=0 Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.539441 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbf8m" event={"ID":"db228aa4-fed8-4b8b-ad5b-56cec6617487","Type":"ContainerDied","Data":"ad8a10377b86f3a98d3805a98db10d6b62fc222eff565434fa87c926e133b2cc"} Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.539454 4741 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pbf8m" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.539496 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbf8m" event={"ID":"db228aa4-fed8-4b8b-ad5b-56cec6617487","Type":"ContainerDied","Data":"f74154aa145599cb3595a83ec5f82ab85e612676e740d5f2d82eba67baaddeb2"} Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.539531 4741 scope.go:117] "RemoveContainer" containerID="ad8a10377b86f3a98d3805a98db10d6b62fc222eff565434fa87c926e133b2cc" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.594396 4741 scope.go:117] "RemoveContainer" containerID="fa92bd55cc5a5b97216939c28cada4e92d81f2f237a5d2304d2ad863154fa89f" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.631917 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pbf8m"] Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.649299 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pbf8m"] Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.689597 4741 scope.go:117] "RemoveContainer" containerID="f1b75ccf0356b4db71fe0857d29f6aec7ecaa380353b1c9eec52defeb444c9e0" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.725153 4741 scope.go:117] "RemoveContainer" containerID="ad8a10377b86f3a98d3805a98db10d6b62fc222eff565434fa87c926e133b2cc" Feb 26 09:00:17 crc kubenswrapper[4741]: E0226 09:00:17.726589 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad8a10377b86f3a98d3805a98db10d6b62fc222eff565434fa87c926e133b2cc\": container with ID starting with ad8a10377b86f3a98d3805a98db10d6b62fc222eff565434fa87c926e133b2cc not found: ID does not exist" containerID="ad8a10377b86f3a98d3805a98db10d6b62fc222eff565434fa87c926e133b2cc" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.726625 4741 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad8a10377b86f3a98d3805a98db10d6b62fc222eff565434fa87c926e133b2cc"} err="failed to get container status \"ad8a10377b86f3a98d3805a98db10d6b62fc222eff565434fa87c926e133b2cc\": rpc error: code = NotFound desc = could not find container \"ad8a10377b86f3a98d3805a98db10d6b62fc222eff565434fa87c926e133b2cc\": container with ID starting with ad8a10377b86f3a98d3805a98db10d6b62fc222eff565434fa87c926e133b2cc not found: ID does not exist" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.726651 4741 scope.go:117] "RemoveContainer" containerID="fa92bd55cc5a5b97216939c28cada4e92d81f2f237a5d2304d2ad863154fa89f" Feb 26 09:00:17 crc kubenswrapper[4741]: E0226 09:00:17.729034 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa92bd55cc5a5b97216939c28cada4e92d81f2f237a5d2304d2ad863154fa89f\": container with ID starting with fa92bd55cc5a5b97216939c28cada4e92d81f2f237a5d2304d2ad863154fa89f not found: ID does not exist" containerID="fa92bd55cc5a5b97216939c28cada4e92d81f2f237a5d2304d2ad863154fa89f" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.729095 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa92bd55cc5a5b97216939c28cada4e92d81f2f237a5d2304d2ad863154fa89f"} err="failed to get container status \"fa92bd55cc5a5b97216939c28cada4e92d81f2f237a5d2304d2ad863154fa89f\": rpc error: code = NotFound desc = could not find container \"fa92bd55cc5a5b97216939c28cada4e92d81f2f237a5d2304d2ad863154fa89f\": container with ID starting with fa92bd55cc5a5b97216939c28cada4e92d81f2f237a5d2304d2ad863154fa89f not found: ID does not exist" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.729155 4741 scope.go:117] "RemoveContainer" containerID="f1b75ccf0356b4db71fe0857d29f6aec7ecaa380353b1c9eec52defeb444c9e0" Feb 26 09:00:17 crc kubenswrapper[4741]: E0226 
09:00:17.729709 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1b75ccf0356b4db71fe0857d29f6aec7ecaa380353b1c9eec52defeb444c9e0\": container with ID starting with f1b75ccf0356b4db71fe0857d29f6aec7ecaa380353b1c9eec52defeb444c9e0 not found: ID does not exist" containerID="f1b75ccf0356b4db71fe0857d29f6aec7ecaa380353b1c9eec52defeb444c9e0" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.729747 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1b75ccf0356b4db71fe0857d29f6aec7ecaa380353b1c9eec52defeb444c9e0"} err="failed to get container status \"f1b75ccf0356b4db71fe0857d29f6aec7ecaa380353b1c9eec52defeb444c9e0\": rpc error: code = NotFound desc = could not find container \"f1b75ccf0356b4db71fe0857d29f6aec7ecaa380353b1c9eec52defeb444c9e0\": container with ID starting with f1b75ccf0356b4db71fe0857d29f6aec7ecaa380353b1c9eec52defeb444c9e0 not found: ID does not exist" Feb 26 09:00:17 crc kubenswrapper[4741]: I0226 09:00:17.807525 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db228aa4-fed8-4b8b-ad5b-56cec6617487" path="/var/lib/kubelet/pods/db228aa4-fed8-4b8b-ad5b-56cec6617487/volumes" Feb 26 09:00:25 crc kubenswrapper[4741]: I0226 09:00:25.149070 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:00:25 crc kubenswrapper[4741]: I0226 09:00:25.149893 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 26 09:00:55 crc kubenswrapper[4741]: I0226 09:00:55.150042 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:00:55 crc kubenswrapper[4741]: I0226 09:00:55.150945 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:00:55 crc kubenswrapper[4741]: I0226 09:00:55.151072 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 09:00:55 crc kubenswrapper[4741]: I0226 09:00:55.152950 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d11bb5b2dc6159523dcd00e9bf5cffe9cafd35e57361a657b4ae10ad61fbb5c4"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 09:00:55 crc kubenswrapper[4741]: I0226 09:00:55.153035 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://d11bb5b2dc6159523dcd00e9bf5cffe9cafd35e57361a657b4ae10ad61fbb5c4" gracePeriod=600 Feb 26 09:00:56 crc kubenswrapper[4741]: I0226 09:00:56.131747 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" 
containerID="d11bb5b2dc6159523dcd00e9bf5cffe9cafd35e57361a657b4ae10ad61fbb5c4" exitCode=0 Feb 26 09:00:56 crc kubenswrapper[4741]: I0226 09:00:56.132666 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"d11bb5b2dc6159523dcd00e9bf5cffe9cafd35e57361a657b4ae10ad61fbb5c4"} Feb 26 09:00:56 crc kubenswrapper[4741]: I0226 09:00:56.132701 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe"} Feb 26 09:00:56 crc kubenswrapper[4741]: I0226 09:00:56.132718 4741 scope.go:117] "RemoveContainer" containerID="1f0127cd26815e6a6cbe76f6df9b3e6fd57765c67b0f8d669205b385bf6a9f38" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.188709 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29534941-wlmm8"] Feb 26 09:01:00 crc kubenswrapper[4741]: E0226 09:01:00.190628 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea04f7e-f160-4716-8a92-7ef5dc43a822" containerName="oc" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.190652 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea04f7e-f160-4716-8a92-7ef5dc43a822" containerName="oc" Feb 26 09:01:00 crc kubenswrapper[4741]: E0226 09:01:00.190685 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db228aa4-fed8-4b8b-ad5b-56cec6617487" containerName="registry-server" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.190699 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="db228aa4-fed8-4b8b-ad5b-56cec6617487" containerName="registry-server" Feb 26 09:01:00 crc kubenswrapper[4741]: E0226 09:01:00.190733 4741 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f162d4a6-995e-4b7a-b735-bab007914a24" containerName="collect-profiles" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.190745 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f162d4a6-995e-4b7a-b735-bab007914a24" containerName="collect-profiles" Feb 26 09:01:00 crc kubenswrapper[4741]: E0226 09:01:00.190779 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db228aa4-fed8-4b8b-ad5b-56cec6617487" containerName="extract-content" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.190791 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="db228aa4-fed8-4b8b-ad5b-56cec6617487" containerName="extract-content" Feb 26 09:01:00 crc kubenswrapper[4741]: E0226 09:01:00.190851 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db228aa4-fed8-4b8b-ad5b-56cec6617487" containerName="extract-utilities" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.190866 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="db228aa4-fed8-4b8b-ad5b-56cec6617487" containerName="extract-utilities" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.191303 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f162d4a6-995e-4b7a-b735-bab007914a24" containerName="collect-profiles" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.191349 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea04f7e-f160-4716-8a92-7ef5dc43a822" containerName="oc" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.191368 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="db228aa4-fed8-4b8b-ad5b-56cec6617487" containerName="registry-server" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.192943 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.205551 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29534941-wlmm8"] Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.302048 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-fernet-keys\") pod \"keystone-cron-29534941-wlmm8\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.302561 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-combined-ca-bundle\") pod \"keystone-cron-29534941-wlmm8\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.302636 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-config-data\") pod \"keystone-cron-29534941-wlmm8\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.302678 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbtnn\" (UniqueName: \"kubernetes.io/projected/78823a9f-87b3-4f75-be1d-943051329769-kube-api-access-tbtnn\") pod \"keystone-cron-29534941-wlmm8\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.405057 4741 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-config-data\") pod \"keystone-cron-29534941-wlmm8\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.405211 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbtnn\" (UniqueName: \"kubernetes.io/projected/78823a9f-87b3-4f75-be1d-943051329769-kube-api-access-tbtnn\") pod \"keystone-cron-29534941-wlmm8\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.405405 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-fernet-keys\") pod \"keystone-cron-29534941-wlmm8\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.405540 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-combined-ca-bundle\") pod \"keystone-cron-29534941-wlmm8\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.414881 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-combined-ca-bundle\") pod \"keystone-cron-29534941-wlmm8\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.415950 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-config-data\") pod \"keystone-cron-29534941-wlmm8\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.418934 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-fernet-keys\") pod \"keystone-cron-29534941-wlmm8\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.430136 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbtnn\" (UniqueName: \"kubernetes.io/projected/78823a9f-87b3-4f75-be1d-943051329769-kube-api-access-tbtnn\") pod \"keystone-cron-29534941-wlmm8\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:00 crc kubenswrapper[4741]: I0226 09:01:00.537940 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:01 crc kubenswrapper[4741]: I0226 09:01:01.088624 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29534941-wlmm8"] Feb 26 09:01:01 crc kubenswrapper[4741]: I0226 09:01:01.206610 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29534941-wlmm8" event={"ID":"78823a9f-87b3-4f75-be1d-943051329769","Type":"ContainerStarted","Data":"421365a3f84b3ec02e9002ef7f6b1da66f4bd1df1e7a763d812932d800431f53"} Feb 26 09:01:01 crc kubenswrapper[4741]: I0226 09:01:01.923790 4741 scope.go:117] "RemoveContainer" containerID="0553f6ac15d04eaa843de47fdb335c1973288689586121cf0ac6c5c6c367b9fa" Feb 26 09:01:01 crc kubenswrapper[4741]: I0226 09:01:01.968075 4741 scope.go:117] "RemoveContainer" containerID="8dcadd91ed6e0ee7830d93cee38d9964c0589c125872c2630d562e914dc072e3" Feb 26 09:01:02 crc kubenswrapper[4741]: I0226 09:01:02.226730 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29534941-wlmm8" event={"ID":"78823a9f-87b3-4f75-be1d-943051329769","Type":"ContainerStarted","Data":"a0611dde6d12142357698f66560afc3d7e29184ffceacab8052b3fe6dd192ec0"} Feb 26 09:01:02 crc kubenswrapper[4741]: I0226 09:01:02.259324 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29534941-wlmm8" podStartSLOduration=2.25929544 podStartE2EDuration="2.25929544s" podCreationTimestamp="2026-02-26 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:01:02.248505254 +0000 UTC m=+2897.244442671" watchObservedRunningTime="2026-02-26 09:01:02.25929544 +0000 UTC m=+2897.255232857" Feb 26 09:01:05 crc kubenswrapper[4741]: I0226 09:01:05.294428 4741 generic.go:334] "Generic (PLEG): container finished" podID="78823a9f-87b3-4f75-be1d-943051329769" 
containerID="a0611dde6d12142357698f66560afc3d7e29184ffceacab8052b3fe6dd192ec0" exitCode=0 Feb 26 09:01:05 crc kubenswrapper[4741]: I0226 09:01:05.294941 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29534941-wlmm8" event={"ID":"78823a9f-87b3-4f75-be1d-943051329769","Type":"ContainerDied","Data":"a0611dde6d12142357698f66560afc3d7e29184ffceacab8052b3fe6dd192ec0"} Feb 26 09:01:06 crc kubenswrapper[4741]: I0226 09:01:06.858896 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:01:06 crc kubenswrapper[4741]: I0226 09:01:06.972420 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-combined-ca-bundle\") pod \"78823a9f-87b3-4f75-be1d-943051329769\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " Feb 26 09:01:06 crc kubenswrapper[4741]: I0226 09:01:06.972581 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-fernet-keys\") pod \"78823a9f-87b3-4f75-be1d-943051329769\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " Feb 26 09:01:06 crc kubenswrapper[4741]: I0226 09:01:06.973170 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbtnn\" (UniqueName: \"kubernetes.io/projected/78823a9f-87b3-4f75-be1d-943051329769-kube-api-access-tbtnn\") pod \"78823a9f-87b3-4f75-be1d-943051329769\" (UID: \"78823a9f-87b3-4f75-be1d-943051329769\") " Feb 26 09:01:06 crc kubenswrapper[4741]: I0226 09:01:06.981027 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-config-data\") pod \"78823a9f-87b3-4f75-be1d-943051329769\" (UID: 
\"78823a9f-87b3-4f75-be1d-943051329769\") " Feb 26 09:01:06 crc kubenswrapper[4741]: I0226 09:01:06.985171 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78823a9f-87b3-4f75-be1d-943051329769-kube-api-access-tbtnn" (OuterVolumeSpecName: "kube-api-access-tbtnn") pod "78823a9f-87b3-4f75-be1d-943051329769" (UID: "78823a9f-87b3-4f75-be1d-943051329769"). InnerVolumeSpecName "kube-api-access-tbtnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:01:06 crc kubenswrapper[4741]: I0226 09:01:06.986304 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbtnn\" (UniqueName: \"kubernetes.io/projected/78823a9f-87b3-4f75-be1d-943051329769-kube-api-access-tbtnn\") on node \"crc\" DevicePath \"\"" Feb 26 09:01:06 crc kubenswrapper[4741]: I0226 09:01:06.987308 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "78823a9f-87b3-4f75-be1d-943051329769" (UID: "78823a9f-87b3-4f75-be1d-943051329769"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:01:07 crc kubenswrapper[4741]: I0226 09:01:07.014415 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78823a9f-87b3-4f75-be1d-943051329769" (UID: "78823a9f-87b3-4f75-be1d-943051329769"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:01:07 crc kubenswrapper[4741]: I0226 09:01:07.046794 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-config-data" (OuterVolumeSpecName: "config-data") pod "78823a9f-87b3-4f75-be1d-943051329769" (UID: "78823a9f-87b3-4f75-be1d-943051329769"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:01:07 crc kubenswrapper[4741]: I0226 09:01:07.090090 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 09:01:07 crc kubenswrapper[4741]: I0226 09:01:07.090160 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:01:07 crc kubenswrapper[4741]: I0226 09:01:07.090180 4741 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/78823a9f-87b3-4f75-be1d-943051329769-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 09:01:07 crc kubenswrapper[4741]: I0226 09:01:07.327272 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29534941-wlmm8" event={"ID":"78823a9f-87b3-4f75-be1d-943051329769","Type":"ContainerDied","Data":"421365a3f84b3ec02e9002ef7f6b1da66f4bd1df1e7a763d812932d800431f53"} Feb 26 09:01:07 crc kubenswrapper[4741]: I0226 09:01:07.327328 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="421365a3f84b3ec02e9002ef7f6b1da66f4bd1df1e7a763d812932d800431f53" Feb 26 09:01:07 crc kubenswrapper[4741]: I0226 09:01:07.327390 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29534941-wlmm8" Feb 26 09:02:00 crc kubenswrapper[4741]: I0226 09:02:00.176019 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534942-ht4jp"] Feb 26 09:02:00 crc kubenswrapper[4741]: E0226 09:02:00.177461 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78823a9f-87b3-4f75-be1d-943051329769" containerName="keystone-cron" Feb 26 09:02:00 crc kubenswrapper[4741]: I0226 09:02:00.177477 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="78823a9f-87b3-4f75-be1d-943051329769" containerName="keystone-cron" Feb 26 09:02:00 crc kubenswrapper[4741]: I0226 09:02:00.177806 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="78823a9f-87b3-4f75-be1d-943051329769" containerName="keystone-cron" Feb 26 09:02:00 crc kubenswrapper[4741]: I0226 09:02:00.179000 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534942-ht4jp" Feb 26 09:02:00 crc kubenswrapper[4741]: I0226 09:02:00.191564 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534942-ht4jp"] Feb 26 09:02:00 crc kubenswrapper[4741]: I0226 09:02:00.196745 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:02:00 crc kubenswrapper[4741]: I0226 09:02:00.197337 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:02:00 crc kubenswrapper[4741]: I0226 09:02:00.202439 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:02:00 crc kubenswrapper[4741]: I0226 09:02:00.265062 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l25f7\" (UniqueName: 
\"kubernetes.io/projected/7f5322e6-e663-4e7f-aa5c-a54647d23a4f-kube-api-access-l25f7\") pod \"auto-csr-approver-29534942-ht4jp\" (UID: \"7f5322e6-e663-4e7f-aa5c-a54647d23a4f\") " pod="openshift-infra/auto-csr-approver-29534942-ht4jp" Feb 26 09:02:00 crc kubenswrapper[4741]: I0226 09:02:00.367418 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l25f7\" (UniqueName: \"kubernetes.io/projected/7f5322e6-e663-4e7f-aa5c-a54647d23a4f-kube-api-access-l25f7\") pod \"auto-csr-approver-29534942-ht4jp\" (UID: \"7f5322e6-e663-4e7f-aa5c-a54647d23a4f\") " pod="openshift-infra/auto-csr-approver-29534942-ht4jp" Feb 26 09:02:00 crc kubenswrapper[4741]: I0226 09:02:00.415949 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l25f7\" (UniqueName: \"kubernetes.io/projected/7f5322e6-e663-4e7f-aa5c-a54647d23a4f-kube-api-access-l25f7\") pod \"auto-csr-approver-29534942-ht4jp\" (UID: \"7f5322e6-e663-4e7f-aa5c-a54647d23a4f\") " pod="openshift-infra/auto-csr-approver-29534942-ht4jp" Feb 26 09:02:00 crc kubenswrapper[4741]: I0226 09:02:00.588077 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534942-ht4jp" Feb 26 09:02:01 crc kubenswrapper[4741]: I0226 09:02:01.155077 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534942-ht4jp"] Feb 26 09:02:02 crc kubenswrapper[4741]: I0226 09:02:02.150836 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534942-ht4jp" event={"ID":"7f5322e6-e663-4e7f-aa5c-a54647d23a4f","Type":"ContainerStarted","Data":"096420d826fecab7fd1f84236c09f9eb16d792341d7c25b3ada334fa875417d4"} Feb 26 09:02:05 crc kubenswrapper[4741]: I0226 09:02:05.200401 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534942-ht4jp" event={"ID":"7f5322e6-e663-4e7f-aa5c-a54647d23a4f","Type":"ContainerStarted","Data":"0f1055a955b9432d6597c6b2e5c255ed3e18957b440f45b52f5030187ebe7433"} Feb 26 09:02:05 crc kubenswrapper[4741]: I0226 09:02:05.226446 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534942-ht4jp" podStartSLOduration=1.8218647890000002 podStartE2EDuration="5.226419633s" podCreationTimestamp="2026-02-26 09:02:00 +0000 UTC" firstStartedPulling="2026-02-26 09:02:01.162262693 +0000 UTC m=+2956.158200090" lastFinishedPulling="2026-02-26 09:02:04.566817547 +0000 UTC m=+2959.562754934" observedRunningTime="2026-02-26 09:02:05.21994498 +0000 UTC m=+2960.215882367" watchObservedRunningTime="2026-02-26 09:02:05.226419633 +0000 UTC m=+2960.222357030" Feb 26 09:02:06 crc kubenswrapper[4741]: I0226 09:02:06.224415 4741 generic.go:334] "Generic (PLEG): container finished" podID="7f5322e6-e663-4e7f-aa5c-a54647d23a4f" containerID="0f1055a955b9432d6597c6b2e5c255ed3e18957b440f45b52f5030187ebe7433" exitCode=0 Feb 26 09:02:06 crc kubenswrapper[4741]: I0226 09:02:06.224478 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534942-ht4jp" 
event={"ID":"7f5322e6-e663-4e7f-aa5c-a54647d23a4f","Type":"ContainerDied","Data":"0f1055a955b9432d6597c6b2e5c255ed3e18957b440f45b52f5030187ebe7433"} Feb 26 09:02:07 crc kubenswrapper[4741]: I0226 09:02:07.709101 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534942-ht4jp" Feb 26 09:02:07 crc kubenswrapper[4741]: I0226 09:02:07.746850 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l25f7\" (UniqueName: \"kubernetes.io/projected/7f5322e6-e663-4e7f-aa5c-a54647d23a4f-kube-api-access-l25f7\") pod \"7f5322e6-e663-4e7f-aa5c-a54647d23a4f\" (UID: \"7f5322e6-e663-4e7f-aa5c-a54647d23a4f\") " Feb 26 09:02:07 crc kubenswrapper[4741]: I0226 09:02:07.756197 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f5322e6-e663-4e7f-aa5c-a54647d23a4f-kube-api-access-l25f7" (OuterVolumeSpecName: "kube-api-access-l25f7") pod "7f5322e6-e663-4e7f-aa5c-a54647d23a4f" (UID: "7f5322e6-e663-4e7f-aa5c-a54647d23a4f"). InnerVolumeSpecName "kube-api-access-l25f7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:02:07 crc kubenswrapper[4741]: I0226 09:02:07.859138 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l25f7\" (UniqueName: \"kubernetes.io/projected/7f5322e6-e663-4e7f-aa5c-a54647d23a4f-kube-api-access-l25f7\") on node \"crc\" DevicePath \"\"" Feb 26 09:02:08 crc kubenswrapper[4741]: I0226 09:02:08.252013 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534942-ht4jp" event={"ID":"7f5322e6-e663-4e7f-aa5c-a54647d23a4f","Type":"ContainerDied","Data":"096420d826fecab7fd1f84236c09f9eb16d792341d7c25b3ada334fa875417d4"} Feb 26 09:02:08 crc kubenswrapper[4741]: I0226 09:02:08.252075 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="096420d826fecab7fd1f84236c09f9eb16d792341d7c25b3ada334fa875417d4" Feb 26 09:02:08 crc kubenswrapper[4741]: I0226 09:02:08.252119 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534942-ht4jp" Feb 26 09:02:08 crc kubenswrapper[4741]: I0226 09:02:08.353804 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534936-bcjv2"] Feb 26 09:02:08 crc kubenswrapper[4741]: I0226 09:02:08.371466 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534936-bcjv2"] Feb 26 09:02:09 crc kubenswrapper[4741]: I0226 09:02:09.807370 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cc40d77-5635-4ff0-9c9f-0c5c1c02085e" path="/var/lib/kubelet/pods/4cc40d77-5635-4ff0-9c9f-0c5c1c02085e/volumes" Feb 26 09:02:09 crc kubenswrapper[4741]: I0226 09:02:09.871926 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bzk4c"] Feb 26 09:02:09 crc kubenswrapper[4741]: E0226 09:02:09.872841 4741 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7f5322e6-e663-4e7f-aa5c-a54647d23a4f" containerName="oc" Feb 26 09:02:09 crc kubenswrapper[4741]: I0226 09:02:09.872876 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5322e6-e663-4e7f-aa5c-a54647d23a4f" containerName="oc" Feb 26 09:02:09 crc kubenswrapper[4741]: I0226 09:02:09.873389 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5322e6-e663-4e7f-aa5c-a54647d23a4f" containerName="oc" Feb 26 09:02:09 crc kubenswrapper[4741]: I0226 09:02:09.877753 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:09 crc kubenswrapper[4741]: I0226 09:02:09.893808 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzk4c"] Feb 26 09:02:09 crc kubenswrapper[4741]: I0226 09:02:09.916235 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45c12d5-124b-4b22-9718-74b785a73c51-catalog-content\") pod \"redhat-marketplace-bzk4c\" (UID: \"b45c12d5-124b-4b22-9718-74b785a73c51\") " pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:09 crc kubenswrapper[4741]: I0226 09:02:09.916774 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45c12d5-124b-4b22-9718-74b785a73c51-utilities\") pod \"redhat-marketplace-bzk4c\" (UID: \"b45c12d5-124b-4b22-9718-74b785a73c51\") " pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:09 crc kubenswrapper[4741]: I0226 09:02:09.917373 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rft7p\" (UniqueName: \"kubernetes.io/projected/b45c12d5-124b-4b22-9718-74b785a73c51-kube-api-access-rft7p\") pod \"redhat-marketplace-bzk4c\" (UID: \"b45c12d5-124b-4b22-9718-74b785a73c51\") " 
pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:10 crc kubenswrapper[4741]: I0226 09:02:10.019771 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45c12d5-124b-4b22-9718-74b785a73c51-catalog-content\") pod \"redhat-marketplace-bzk4c\" (UID: \"b45c12d5-124b-4b22-9718-74b785a73c51\") " pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:10 crc kubenswrapper[4741]: I0226 09:02:10.020465 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45c12d5-124b-4b22-9718-74b785a73c51-utilities\") pod \"redhat-marketplace-bzk4c\" (UID: \"b45c12d5-124b-4b22-9718-74b785a73c51\") " pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:10 crc kubenswrapper[4741]: I0226 09:02:10.020565 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rft7p\" (UniqueName: \"kubernetes.io/projected/b45c12d5-124b-4b22-9718-74b785a73c51-kube-api-access-rft7p\") pod \"redhat-marketplace-bzk4c\" (UID: \"b45c12d5-124b-4b22-9718-74b785a73c51\") " pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:10 crc kubenswrapper[4741]: I0226 09:02:10.020572 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45c12d5-124b-4b22-9718-74b785a73c51-catalog-content\") pod \"redhat-marketplace-bzk4c\" (UID: \"b45c12d5-124b-4b22-9718-74b785a73c51\") " pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:10 crc kubenswrapper[4741]: I0226 09:02:10.021186 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45c12d5-124b-4b22-9718-74b785a73c51-utilities\") pod \"redhat-marketplace-bzk4c\" (UID: \"b45c12d5-124b-4b22-9718-74b785a73c51\") " pod="openshift-marketplace/redhat-marketplace-bzk4c" 
Feb 26 09:02:10 crc kubenswrapper[4741]: I0226 09:02:10.047640 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rft7p\" (UniqueName: \"kubernetes.io/projected/b45c12d5-124b-4b22-9718-74b785a73c51-kube-api-access-rft7p\") pod \"redhat-marketplace-bzk4c\" (UID: \"b45c12d5-124b-4b22-9718-74b785a73c51\") " pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:10 crc kubenswrapper[4741]: I0226 09:02:10.211030 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:10 crc kubenswrapper[4741]: I0226 09:02:10.821158 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzk4c"] Feb 26 09:02:11 crc kubenswrapper[4741]: I0226 09:02:11.310700 4741 generic.go:334] "Generic (PLEG): container finished" podID="b45c12d5-124b-4b22-9718-74b785a73c51" containerID="4d707898558a8bb446f5a8182fa220f9a50c1a1bceb5d25d791799a218c9969c" exitCode=0 Feb 26 09:02:11 crc kubenswrapper[4741]: I0226 09:02:11.310823 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzk4c" event={"ID":"b45c12d5-124b-4b22-9718-74b785a73c51","Type":"ContainerDied","Data":"4d707898558a8bb446f5a8182fa220f9a50c1a1bceb5d25d791799a218c9969c"} Feb 26 09:02:11 crc kubenswrapper[4741]: I0226 09:02:11.311224 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzk4c" event={"ID":"b45c12d5-124b-4b22-9718-74b785a73c51","Type":"ContainerStarted","Data":"974bd1987d67336b0e165c62b94522fe7a7cb8c24e4156021259b19fd9b6aa89"} Feb 26 09:02:12 crc kubenswrapper[4741]: I0226 09:02:12.325528 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzk4c" event={"ID":"b45c12d5-124b-4b22-9718-74b785a73c51","Type":"ContainerStarted","Data":"9221a4dd34e69db4ade47a6400d660eee4c328b08536e709637670293645b679"} Feb 26 
09:02:13 crc kubenswrapper[4741]: I0226 09:02:13.343058 4741 generic.go:334] "Generic (PLEG): container finished" podID="b45c12d5-124b-4b22-9718-74b785a73c51" containerID="9221a4dd34e69db4ade47a6400d660eee4c328b08536e709637670293645b679" exitCode=0 Feb 26 09:02:13 crc kubenswrapper[4741]: I0226 09:02:13.343158 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzk4c" event={"ID":"b45c12d5-124b-4b22-9718-74b785a73c51","Type":"ContainerDied","Data":"9221a4dd34e69db4ade47a6400d660eee4c328b08536e709637670293645b679"} Feb 26 09:02:14 crc kubenswrapper[4741]: I0226 09:02:14.359188 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzk4c" event={"ID":"b45c12d5-124b-4b22-9718-74b785a73c51","Type":"ContainerStarted","Data":"a5ca4c8efb934e5ab902b482c007a5b01ead84cac9a9141685ad5f42ea8ca02f"} Feb 26 09:02:14 crc kubenswrapper[4741]: I0226 09:02:14.404324 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bzk4c" podStartSLOduration=2.914981643 podStartE2EDuration="5.404300403s" podCreationTimestamp="2026-02-26 09:02:09 +0000 UTC" firstStartedPulling="2026-02-26 09:02:11.313854064 +0000 UTC m=+2966.309791471" lastFinishedPulling="2026-02-26 09:02:13.803172834 +0000 UTC m=+2968.799110231" observedRunningTime="2026-02-26 09:02:14.377707539 +0000 UTC m=+2969.373644926" watchObservedRunningTime="2026-02-26 09:02:14.404300403 +0000 UTC m=+2969.400237790" Feb 26 09:02:16 crc kubenswrapper[4741]: I0226 09:02:16.398194 4741 generic.go:334] "Generic (PLEG): container finished" podID="93812dcb-b40b-467e-8831-83b017ebd77b" containerID="0b7e511bf267186d6c5eb9f8c51dbe546c9d555de28a93615f6d95975d5afebd" exitCode=0 Feb 26 09:02:16 crc kubenswrapper[4741]: I0226 09:02:16.398307 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" 
event={"ID":"93812dcb-b40b-467e-8831-83b017ebd77b","Type":"ContainerDied","Data":"0b7e511bf267186d6c5eb9f8c51dbe546c9d555de28a93615f6d95975d5afebd"} Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.108398 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.129429 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-libvirt-secret-0\") pod \"93812dcb-b40b-467e-8831-83b017ebd77b\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.129479 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-libvirt-combined-ca-bundle\") pod \"93812dcb-b40b-467e-8831-83b017ebd77b\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.129826 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5f74\" (UniqueName: \"kubernetes.io/projected/93812dcb-b40b-467e-8831-83b017ebd77b-kube-api-access-b5f74\") pod \"93812dcb-b40b-467e-8831-83b017ebd77b\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.129932 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-inventory\") pod \"93812dcb-b40b-467e-8831-83b017ebd77b\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.130053 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-ssh-key-openstack-edpm-ipam\") pod \"93812dcb-b40b-467e-8831-83b017ebd77b\" (UID: \"93812dcb-b40b-467e-8831-83b017ebd77b\") " Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.139484 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93812dcb-b40b-467e-8831-83b017ebd77b-kube-api-access-b5f74" (OuterVolumeSpecName: "kube-api-access-b5f74") pod "93812dcb-b40b-467e-8831-83b017ebd77b" (UID: "93812dcb-b40b-467e-8831-83b017ebd77b"). InnerVolumeSpecName "kube-api-access-b5f74". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.146309 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "93812dcb-b40b-467e-8831-83b017ebd77b" (UID: "93812dcb-b40b-467e-8831-83b017ebd77b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.187655 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-inventory" (OuterVolumeSpecName: "inventory") pod "93812dcb-b40b-467e-8831-83b017ebd77b" (UID: "93812dcb-b40b-467e-8831-83b017ebd77b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.188486 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "93812dcb-b40b-467e-8831-83b017ebd77b" (UID: "93812dcb-b40b-467e-8831-83b017ebd77b"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.200308 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "93812dcb-b40b-467e-8831-83b017ebd77b" (UID: "93812dcb-b40b-467e-8831-83b017ebd77b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.233653 4741 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.233686 4741 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.233697 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5f74\" (UniqueName: \"kubernetes.io/projected/93812dcb-b40b-467e-8831-83b017ebd77b-kube-api-access-b5f74\") on node \"crc\" DevicePath \"\"" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.233708 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.233716 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93812dcb-b40b-467e-8831-83b017ebd77b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.426620 4741 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" event={"ID":"93812dcb-b40b-467e-8831-83b017ebd77b","Type":"ContainerDied","Data":"d1e7094f872ec8693b52d821add7dd63824ca2d361836d149c5ffa0f07124e19"} Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.426690 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1e7094f872ec8693b52d821add7dd63824ca2d361836d149c5ffa0f07124e19" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.426789 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.577837 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj"] Feb 26 09:02:18 crc kubenswrapper[4741]: E0226 09:02:18.578906 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93812dcb-b40b-467e-8831-83b017ebd77b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.578926 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="93812dcb-b40b-467e-8831-83b017ebd77b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.579369 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="93812dcb-b40b-467e-8831-83b017ebd77b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.580524 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.584310 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.584851 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.584974 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.585034 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.585061 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.585180 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.585475 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.601680 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj"] Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.648021 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: 
I0226 09:02:18.648197 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.648230 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.648271 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.648656 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.648733 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.648818 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.649049 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.649307 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.649471 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp967\" (UniqueName: \"kubernetes.io/projected/ebe89e06-bf26-474e-8caf-f29a10b0fb24-kube-api-access-xp967\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: 
\"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.649513 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.753085 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.753169 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.753220 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.753345 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.753374 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.753413 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.753468 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.753537 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-3\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.753570 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp967\" (UniqueName: \"kubernetes.io/projected/ebe89e06-bf26-474e-8caf-f29a10b0fb24-kube-api-access-xp967\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.753590 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.753631 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.755060 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.760017 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.760019 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.766288 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.766425 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.769761 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: 
\"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.770047 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.771737 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.773464 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.774395 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.777591 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xp967\" (UniqueName: \"kubernetes.io/projected/ebe89e06-bf26-474e-8caf-f29a10b0fb24-kube-api-access-xp967\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ch4rj\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:18 crc kubenswrapper[4741]: I0226 09:02:18.919761 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:02:19 crc kubenswrapper[4741]: I0226 09:02:19.593149 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj"] Feb 26 09:02:20 crc kubenswrapper[4741]: I0226 09:02:20.212098 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:20 crc kubenswrapper[4741]: I0226 09:02:20.212673 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:20 crc kubenswrapper[4741]: I0226 09:02:20.322228 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:20 crc kubenswrapper[4741]: I0226 09:02:20.457455 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" event={"ID":"ebe89e06-bf26-474e-8caf-f29a10b0fb24","Type":"ContainerStarted","Data":"a35483e39275b6d9ce7667561957d6ba106df1bf82f65ca12c79b2f5426795bf"} Feb 26 09:02:20 crc kubenswrapper[4741]: I0226 09:02:20.534284 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:20 crc kubenswrapper[4741]: I0226 09:02:20.635866 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzk4c"] Feb 26 09:02:21 crc kubenswrapper[4741]: I0226 
09:02:21.491909 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" event={"ID":"ebe89e06-bf26-474e-8caf-f29a10b0fb24","Type":"ContainerStarted","Data":"7b80271e8dfb2932dfeb477765d1766ad94775b77b37b6c90b5090addaf8fab7"} Feb 26 09:02:21 crc kubenswrapper[4741]: I0226 09:02:21.530781 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" podStartSLOduration=3.11430724 podStartE2EDuration="3.530754974s" podCreationTimestamp="2026-02-26 09:02:18 +0000 UTC" firstStartedPulling="2026-02-26 09:02:19.603051984 +0000 UTC m=+2974.598989371" lastFinishedPulling="2026-02-26 09:02:20.019499678 +0000 UTC m=+2975.015437105" observedRunningTime="2026-02-26 09:02:21.519304689 +0000 UTC m=+2976.515242126" watchObservedRunningTime="2026-02-26 09:02:21.530754974 +0000 UTC m=+2976.526692371" Feb 26 09:02:22 crc kubenswrapper[4741]: I0226 09:02:22.507075 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bzk4c" podUID="b45c12d5-124b-4b22-9718-74b785a73c51" containerName="registry-server" containerID="cri-o://a5ca4c8efb934e5ab902b482c007a5b01ead84cac9a9141685ad5f42ea8ca02f" gracePeriod=2 Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.226770 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.406747 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45c12d5-124b-4b22-9718-74b785a73c51-catalog-content\") pod \"b45c12d5-124b-4b22-9718-74b785a73c51\" (UID: \"b45c12d5-124b-4b22-9718-74b785a73c51\") " Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.407060 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rft7p\" (UniqueName: \"kubernetes.io/projected/b45c12d5-124b-4b22-9718-74b785a73c51-kube-api-access-rft7p\") pod \"b45c12d5-124b-4b22-9718-74b785a73c51\" (UID: \"b45c12d5-124b-4b22-9718-74b785a73c51\") " Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.407285 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45c12d5-124b-4b22-9718-74b785a73c51-utilities\") pod \"b45c12d5-124b-4b22-9718-74b785a73c51\" (UID: \"b45c12d5-124b-4b22-9718-74b785a73c51\") " Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.408196 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b45c12d5-124b-4b22-9718-74b785a73c51-utilities" (OuterVolumeSpecName: "utilities") pod "b45c12d5-124b-4b22-9718-74b785a73c51" (UID: "b45c12d5-124b-4b22-9718-74b785a73c51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.417910 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45c12d5-124b-4b22-9718-74b785a73c51-kube-api-access-rft7p" (OuterVolumeSpecName: "kube-api-access-rft7p") pod "b45c12d5-124b-4b22-9718-74b785a73c51" (UID: "b45c12d5-124b-4b22-9718-74b785a73c51"). InnerVolumeSpecName "kube-api-access-rft7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.433283 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b45c12d5-124b-4b22-9718-74b785a73c51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b45c12d5-124b-4b22-9718-74b785a73c51" (UID: "b45c12d5-124b-4b22-9718-74b785a73c51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.511075 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45c12d5-124b-4b22-9718-74b785a73c51-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.511121 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45c12d5-124b-4b22-9718-74b785a73c51-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.511136 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rft7p\" (UniqueName: \"kubernetes.io/projected/b45c12d5-124b-4b22-9718-74b785a73c51-kube-api-access-rft7p\") on node \"crc\" DevicePath \"\"" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.523322 4741 generic.go:334] "Generic (PLEG): container finished" podID="b45c12d5-124b-4b22-9718-74b785a73c51" containerID="a5ca4c8efb934e5ab902b482c007a5b01ead84cac9a9141685ad5f42ea8ca02f" exitCode=0 Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.523442 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzk4c" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.523495 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzk4c" event={"ID":"b45c12d5-124b-4b22-9718-74b785a73c51","Type":"ContainerDied","Data":"a5ca4c8efb934e5ab902b482c007a5b01ead84cac9a9141685ad5f42ea8ca02f"} Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.523914 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzk4c" event={"ID":"b45c12d5-124b-4b22-9718-74b785a73c51","Type":"ContainerDied","Data":"974bd1987d67336b0e165c62b94522fe7a7cb8c24e4156021259b19fd9b6aa89"} Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.523943 4741 scope.go:117] "RemoveContainer" containerID="a5ca4c8efb934e5ab902b482c007a5b01ead84cac9a9141685ad5f42ea8ca02f" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.559728 4741 scope.go:117] "RemoveContainer" containerID="9221a4dd34e69db4ade47a6400d660eee4c328b08536e709637670293645b679" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.567233 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzk4c"] Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.582093 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzk4c"] Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.618484 4741 scope.go:117] "RemoveContainer" containerID="4d707898558a8bb446f5a8182fa220f9a50c1a1bceb5d25d791799a218c9969c" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.659580 4741 scope.go:117] "RemoveContainer" containerID="a5ca4c8efb934e5ab902b482c007a5b01ead84cac9a9141685ad5f42ea8ca02f" Feb 26 09:02:23 crc kubenswrapper[4741]: E0226 09:02:23.660162 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a5ca4c8efb934e5ab902b482c007a5b01ead84cac9a9141685ad5f42ea8ca02f\": container with ID starting with a5ca4c8efb934e5ab902b482c007a5b01ead84cac9a9141685ad5f42ea8ca02f not found: ID does not exist" containerID="a5ca4c8efb934e5ab902b482c007a5b01ead84cac9a9141685ad5f42ea8ca02f" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.660236 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5ca4c8efb934e5ab902b482c007a5b01ead84cac9a9141685ad5f42ea8ca02f"} err="failed to get container status \"a5ca4c8efb934e5ab902b482c007a5b01ead84cac9a9141685ad5f42ea8ca02f\": rpc error: code = NotFound desc = could not find container \"a5ca4c8efb934e5ab902b482c007a5b01ead84cac9a9141685ad5f42ea8ca02f\": container with ID starting with a5ca4c8efb934e5ab902b482c007a5b01ead84cac9a9141685ad5f42ea8ca02f not found: ID does not exist" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.660277 4741 scope.go:117] "RemoveContainer" containerID="9221a4dd34e69db4ade47a6400d660eee4c328b08536e709637670293645b679" Feb 26 09:02:23 crc kubenswrapper[4741]: E0226 09:02:23.661234 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9221a4dd34e69db4ade47a6400d660eee4c328b08536e709637670293645b679\": container with ID starting with 9221a4dd34e69db4ade47a6400d660eee4c328b08536e709637670293645b679 not found: ID does not exist" containerID="9221a4dd34e69db4ade47a6400d660eee4c328b08536e709637670293645b679" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.661290 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9221a4dd34e69db4ade47a6400d660eee4c328b08536e709637670293645b679"} err="failed to get container status \"9221a4dd34e69db4ade47a6400d660eee4c328b08536e709637670293645b679\": rpc error: code = NotFound desc = could not find container \"9221a4dd34e69db4ade47a6400d660eee4c328b08536e709637670293645b679\": container with ID 
starting with 9221a4dd34e69db4ade47a6400d660eee4c328b08536e709637670293645b679 not found: ID does not exist" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.661322 4741 scope.go:117] "RemoveContainer" containerID="4d707898558a8bb446f5a8182fa220f9a50c1a1bceb5d25d791799a218c9969c" Feb 26 09:02:23 crc kubenswrapper[4741]: E0226 09:02:23.661701 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d707898558a8bb446f5a8182fa220f9a50c1a1bceb5d25d791799a218c9969c\": container with ID starting with 4d707898558a8bb446f5a8182fa220f9a50c1a1bceb5d25d791799a218c9969c not found: ID does not exist" containerID="4d707898558a8bb446f5a8182fa220f9a50c1a1bceb5d25d791799a218c9969c" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.661740 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d707898558a8bb446f5a8182fa220f9a50c1a1bceb5d25d791799a218c9969c"} err="failed to get container status \"4d707898558a8bb446f5a8182fa220f9a50c1a1bceb5d25d791799a218c9969c\": rpc error: code = NotFound desc = could not find container \"4d707898558a8bb446f5a8182fa220f9a50c1a1bceb5d25d791799a218c9969c\": container with ID starting with 4d707898558a8bb446f5a8182fa220f9a50c1a1bceb5d25d791799a218c9969c not found: ID does not exist" Feb 26 09:02:23 crc kubenswrapper[4741]: I0226 09:02:23.810006 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45c12d5-124b-4b22-9718-74b785a73c51" path="/var/lib/kubelet/pods/b45c12d5-124b-4b22-9718-74b785a73c51/volumes" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.147813 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-945g6"] Feb 26 09:02:37 crc kubenswrapper[4741]: E0226 09:02:37.149634 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45c12d5-124b-4b22-9718-74b785a73c51" containerName="registry-server" Feb 26 09:02:37 crc 
kubenswrapper[4741]: I0226 09:02:37.149659 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45c12d5-124b-4b22-9718-74b785a73c51" containerName="registry-server" Feb 26 09:02:37 crc kubenswrapper[4741]: E0226 09:02:37.149702 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45c12d5-124b-4b22-9718-74b785a73c51" containerName="extract-content" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.149718 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45c12d5-124b-4b22-9718-74b785a73c51" containerName="extract-content" Feb 26 09:02:37 crc kubenswrapper[4741]: E0226 09:02:37.149808 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45c12d5-124b-4b22-9718-74b785a73c51" containerName="extract-utilities" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.149821 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45c12d5-124b-4b22-9718-74b785a73c51" containerName="extract-utilities" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.150298 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45c12d5-124b-4b22-9718-74b785a73c51" containerName="registry-server" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.156148 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.164892 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-945g6"] Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.292000 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef18cd91-a3a6-4c2b-8bfa-88430d243605-catalog-content\") pod \"certified-operators-945g6\" (UID: \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\") " pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.292231 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef18cd91-a3a6-4c2b-8bfa-88430d243605-utilities\") pod \"certified-operators-945g6\" (UID: \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\") " pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.293091 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msdvr\" (UniqueName: \"kubernetes.io/projected/ef18cd91-a3a6-4c2b-8bfa-88430d243605-kube-api-access-msdvr\") pod \"certified-operators-945g6\" (UID: \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\") " pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.396131 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef18cd91-a3a6-4c2b-8bfa-88430d243605-utilities\") pod \"certified-operators-945g6\" (UID: \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\") " pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.396355 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-msdvr\" (UniqueName: \"kubernetes.io/projected/ef18cd91-a3a6-4c2b-8bfa-88430d243605-kube-api-access-msdvr\") pod \"certified-operators-945g6\" (UID: \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\") " pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.396435 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef18cd91-a3a6-4c2b-8bfa-88430d243605-catalog-content\") pod \"certified-operators-945g6\" (UID: \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\") " pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.396719 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef18cd91-a3a6-4c2b-8bfa-88430d243605-utilities\") pod \"certified-operators-945g6\" (UID: \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\") " pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.396936 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef18cd91-a3a6-4c2b-8bfa-88430d243605-catalog-content\") pod \"certified-operators-945g6\" (UID: \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\") " pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.431209 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msdvr\" (UniqueName: \"kubernetes.io/projected/ef18cd91-a3a6-4c2b-8bfa-88430d243605-kube-api-access-msdvr\") pod \"certified-operators-945g6\" (UID: \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\") " pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:37 crc kubenswrapper[4741]: I0226 09:02:37.516489 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:38 crc kubenswrapper[4741]: I0226 09:02:38.120297 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-945g6"] Feb 26 09:02:38 crc kubenswrapper[4741]: I0226 09:02:38.733494 4741 generic.go:334] "Generic (PLEG): container finished" podID="ef18cd91-a3a6-4c2b-8bfa-88430d243605" containerID="bd57f2227e0f0ef532924f21d2439e8874fbe256f960b6df5b9837359e40cb5b" exitCode=0 Feb 26 09:02:38 crc kubenswrapper[4741]: I0226 09:02:38.733549 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945g6" event={"ID":"ef18cd91-a3a6-4c2b-8bfa-88430d243605","Type":"ContainerDied","Data":"bd57f2227e0f0ef532924f21d2439e8874fbe256f960b6df5b9837359e40cb5b"} Feb 26 09:02:38 crc kubenswrapper[4741]: I0226 09:02:38.733580 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945g6" event={"ID":"ef18cd91-a3a6-4c2b-8bfa-88430d243605","Type":"ContainerStarted","Data":"70eecf67d2dc89c6fe99ef4c5a920564c86834cb168e9740f8b8681a13b70f73"} Feb 26 09:02:40 crc kubenswrapper[4741]: I0226 09:02:40.761472 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945g6" event={"ID":"ef18cd91-a3a6-4c2b-8bfa-88430d243605","Type":"ContainerStarted","Data":"05b2b3e46bb1ac8781d73d4b1369b06a7933066ac02f2cefad119dc81fd21f28"} Feb 26 09:02:42 crc kubenswrapper[4741]: I0226 09:02:42.792519 4741 generic.go:334] "Generic (PLEG): container finished" podID="ef18cd91-a3a6-4c2b-8bfa-88430d243605" containerID="05b2b3e46bb1ac8781d73d4b1369b06a7933066ac02f2cefad119dc81fd21f28" exitCode=0 Feb 26 09:02:42 crc kubenswrapper[4741]: I0226 09:02:42.792600 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945g6" 
event={"ID":"ef18cd91-a3a6-4c2b-8bfa-88430d243605","Type":"ContainerDied","Data":"05b2b3e46bb1ac8781d73d4b1369b06a7933066ac02f2cefad119dc81fd21f28"} Feb 26 09:02:44 crc kubenswrapper[4741]: I0226 09:02:44.823417 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945g6" event={"ID":"ef18cd91-a3a6-4c2b-8bfa-88430d243605","Type":"ContainerStarted","Data":"7a2dd5caf306ff6c8eb090a2be6a9735b319717581476f38a81c62829f2d85c6"} Feb 26 09:02:44 crc kubenswrapper[4741]: I0226 09:02:44.858630 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-945g6" podStartSLOduration=2.10067037 podStartE2EDuration="7.858606191s" podCreationTimestamp="2026-02-26 09:02:37 +0000 UTC" firstStartedPulling="2026-02-26 09:02:38.737280929 +0000 UTC m=+2993.733218316" lastFinishedPulling="2026-02-26 09:02:44.49521674 +0000 UTC m=+2999.491154137" observedRunningTime="2026-02-26 09:02:44.847596399 +0000 UTC m=+2999.843533796" watchObservedRunningTime="2026-02-26 09:02:44.858606191 +0000 UTC m=+2999.854543578" Feb 26 09:02:47 crc kubenswrapper[4741]: I0226 09:02:47.518287 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:47 crc kubenswrapper[4741]: I0226 09:02:47.518996 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:48 crc kubenswrapper[4741]: I0226 09:02:48.582023 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-945g6" podUID="ef18cd91-a3a6-4c2b-8bfa-88430d243605" containerName="registry-server" probeResult="failure" output=< Feb 26 09:02:48 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:02:48 crc kubenswrapper[4741]: > Feb 26 09:02:55 crc kubenswrapper[4741]: I0226 09:02:55.149297 4741 patch_prober.go:28] 
interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:02:55 crc kubenswrapper[4741]: I0226 09:02:55.150395 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:02:57 crc kubenswrapper[4741]: I0226 09:02:57.602433 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:02:57 crc kubenswrapper[4741]: I0226 09:02:57.675177 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:03:01 crc kubenswrapper[4741]: I0226 09:03:01.048907 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-945g6"] Feb 26 09:03:01 crc kubenswrapper[4741]: I0226 09:03:01.050132 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-945g6" podUID="ef18cd91-a3a6-4c2b-8bfa-88430d243605" containerName="registry-server" containerID="cri-o://7a2dd5caf306ff6c8eb090a2be6a9735b319717581476f38a81c62829f2d85c6" gracePeriod=2 Feb 26 09:03:01 crc kubenswrapper[4741]: I0226 09:03:01.759147 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:03:01 crc kubenswrapper[4741]: I0226 09:03:01.841432 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef18cd91-a3a6-4c2b-8bfa-88430d243605-catalog-content\") pod \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\" (UID: \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\") " Feb 26 09:03:01 crc kubenswrapper[4741]: I0226 09:03:01.842463 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msdvr\" (UniqueName: \"kubernetes.io/projected/ef18cd91-a3a6-4c2b-8bfa-88430d243605-kube-api-access-msdvr\") pod \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\" (UID: \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\") " Feb 26 09:03:01 crc kubenswrapper[4741]: I0226 09:03:01.842802 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef18cd91-a3a6-4c2b-8bfa-88430d243605-utilities\") pod \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\" (UID: \"ef18cd91-a3a6-4c2b-8bfa-88430d243605\") " Feb 26 09:03:01 crc kubenswrapper[4741]: I0226 09:03:01.843479 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef18cd91-a3a6-4c2b-8bfa-88430d243605-utilities" (OuterVolumeSpecName: "utilities") pod "ef18cd91-a3a6-4c2b-8bfa-88430d243605" (UID: "ef18cd91-a3a6-4c2b-8bfa-88430d243605"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:03:01 crc kubenswrapper[4741]: I0226 09:03:01.844029 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef18cd91-a3a6-4c2b-8bfa-88430d243605-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:03:01 crc kubenswrapper[4741]: I0226 09:03:01.853028 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef18cd91-a3a6-4c2b-8bfa-88430d243605-kube-api-access-msdvr" (OuterVolumeSpecName: "kube-api-access-msdvr") pod "ef18cd91-a3a6-4c2b-8bfa-88430d243605" (UID: "ef18cd91-a3a6-4c2b-8bfa-88430d243605"). InnerVolumeSpecName "kube-api-access-msdvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:03:01 crc kubenswrapper[4741]: I0226 09:03:01.906563 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef18cd91-a3a6-4c2b-8bfa-88430d243605-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef18cd91-a3a6-4c2b-8bfa-88430d243605" (UID: "ef18cd91-a3a6-4c2b-8bfa-88430d243605"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:03:01 crc kubenswrapper[4741]: I0226 09:03:01.946893 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef18cd91-a3a6-4c2b-8bfa-88430d243605-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:03:01 crc kubenswrapper[4741]: I0226 09:03:01.947277 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msdvr\" (UniqueName: \"kubernetes.io/projected/ef18cd91-a3a6-4c2b-8bfa-88430d243605-kube-api-access-msdvr\") on node \"crc\" DevicePath \"\"" Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.065011 4741 generic.go:334] "Generic (PLEG): container finished" podID="ef18cd91-a3a6-4c2b-8bfa-88430d243605" containerID="7a2dd5caf306ff6c8eb090a2be6a9735b319717581476f38a81c62829f2d85c6" exitCode=0 Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.065092 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945g6" event={"ID":"ef18cd91-a3a6-4c2b-8bfa-88430d243605","Type":"ContainerDied","Data":"7a2dd5caf306ff6c8eb090a2be6a9735b319717581476f38a81c62829f2d85c6"} Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.065166 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945g6" event={"ID":"ef18cd91-a3a6-4c2b-8bfa-88430d243605","Type":"ContainerDied","Data":"70eecf67d2dc89c6fe99ef4c5a920564c86834cb168e9740f8b8681a13b70f73"} Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.065194 4741 scope.go:117] "RemoveContainer" containerID="7a2dd5caf306ff6c8eb090a2be6a9735b319717581476f38a81c62829f2d85c6" Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.065218 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-945g6" Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.102997 4741 scope.go:117] "RemoveContainer" containerID="05b2b3e46bb1ac8781d73d4b1369b06a7933066ac02f2cefad119dc81fd21f28" Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.124981 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-945g6"] Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.136165 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-945g6"] Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.137966 4741 scope.go:117] "RemoveContainer" containerID="bd57f2227e0f0ef532924f21d2439e8874fbe256f960b6df5b9837359e40cb5b" Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.175489 4741 scope.go:117] "RemoveContainer" containerID="99fce8418122f68fa5d34db4c0f0de0c7fa60dac67a38683928f1b079ecc2cea" Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.257244 4741 scope.go:117] "RemoveContainer" containerID="7a2dd5caf306ff6c8eb090a2be6a9735b319717581476f38a81c62829f2d85c6" Feb 26 09:03:02 crc kubenswrapper[4741]: E0226 09:03:02.257643 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a2dd5caf306ff6c8eb090a2be6a9735b319717581476f38a81c62829f2d85c6\": container with ID starting with 7a2dd5caf306ff6c8eb090a2be6a9735b319717581476f38a81c62829f2d85c6 not found: ID does not exist" containerID="7a2dd5caf306ff6c8eb090a2be6a9735b319717581476f38a81c62829f2d85c6" Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.257720 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a2dd5caf306ff6c8eb090a2be6a9735b319717581476f38a81c62829f2d85c6"} err="failed to get container status \"7a2dd5caf306ff6c8eb090a2be6a9735b319717581476f38a81c62829f2d85c6\": rpc error: code = NotFound desc = could not find 
container \"7a2dd5caf306ff6c8eb090a2be6a9735b319717581476f38a81c62829f2d85c6\": container with ID starting with 7a2dd5caf306ff6c8eb090a2be6a9735b319717581476f38a81c62829f2d85c6 not found: ID does not exist" Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.257766 4741 scope.go:117] "RemoveContainer" containerID="05b2b3e46bb1ac8781d73d4b1369b06a7933066ac02f2cefad119dc81fd21f28" Feb 26 09:03:02 crc kubenswrapper[4741]: E0226 09:03:02.258371 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05b2b3e46bb1ac8781d73d4b1369b06a7933066ac02f2cefad119dc81fd21f28\": container with ID starting with 05b2b3e46bb1ac8781d73d4b1369b06a7933066ac02f2cefad119dc81fd21f28 not found: ID does not exist" containerID="05b2b3e46bb1ac8781d73d4b1369b06a7933066ac02f2cefad119dc81fd21f28" Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.258458 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05b2b3e46bb1ac8781d73d4b1369b06a7933066ac02f2cefad119dc81fd21f28"} err="failed to get container status \"05b2b3e46bb1ac8781d73d4b1369b06a7933066ac02f2cefad119dc81fd21f28\": rpc error: code = NotFound desc = could not find container \"05b2b3e46bb1ac8781d73d4b1369b06a7933066ac02f2cefad119dc81fd21f28\": container with ID starting with 05b2b3e46bb1ac8781d73d4b1369b06a7933066ac02f2cefad119dc81fd21f28 not found: ID does not exist" Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.258505 4741 scope.go:117] "RemoveContainer" containerID="bd57f2227e0f0ef532924f21d2439e8874fbe256f960b6df5b9837359e40cb5b" Feb 26 09:03:02 crc kubenswrapper[4741]: E0226 09:03:02.258907 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd57f2227e0f0ef532924f21d2439e8874fbe256f960b6df5b9837359e40cb5b\": container with ID starting with bd57f2227e0f0ef532924f21d2439e8874fbe256f960b6df5b9837359e40cb5b not found: ID does 
not exist" containerID="bd57f2227e0f0ef532924f21d2439e8874fbe256f960b6df5b9837359e40cb5b" Feb 26 09:03:02 crc kubenswrapper[4741]: I0226 09:03:02.258954 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd57f2227e0f0ef532924f21d2439e8874fbe256f960b6df5b9837359e40cb5b"} err="failed to get container status \"bd57f2227e0f0ef532924f21d2439e8874fbe256f960b6df5b9837359e40cb5b\": rpc error: code = NotFound desc = could not find container \"bd57f2227e0f0ef532924f21d2439e8874fbe256f960b6df5b9837359e40cb5b\": container with ID starting with bd57f2227e0f0ef532924f21d2439e8874fbe256f960b6df5b9837359e40cb5b not found: ID does not exist" Feb 26 09:03:03 crc kubenswrapper[4741]: I0226 09:03:03.807421 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef18cd91-a3a6-4c2b-8bfa-88430d243605" path="/var/lib/kubelet/pods/ef18cd91-a3a6-4c2b-8bfa-88430d243605/volumes" Feb 26 09:03:25 crc kubenswrapper[4741]: I0226 09:03:25.149405 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:03:25 crc kubenswrapper[4741]: I0226 09:03:25.150006 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:03:55 crc kubenswrapper[4741]: I0226 09:03:55.149290 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 09:03:55 crc kubenswrapper[4741]: I0226 09:03:55.150388 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:03:55 crc kubenswrapper[4741]: I0226 09:03:55.150476 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 09:03:55 crc kubenswrapper[4741]: I0226 09:03:55.152009 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 09:03:55 crc kubenswrapper[4741]: I0226 09:03:55.152160 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" gracePeriod=600 Feb 26 09:03:55 crc kubenswrapper[4741]: E0226 09:03:55.281496 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:03:55 crc kubenswrapper[4741]: 
I0226 09:03:55.842300 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" exitCode=0 Feb 26 09:03:55 crc kubenswrapper[4741]: I0226 09:03:55.842394 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe"} Feb 26 09:03:55 crc kubenswrapper[4741]: I0226 09:03:55.842458 4741 scope.go:117] "RemoveContainer" containerID="d11bb5b2dc6159523dcd00e9bf5cffe9cafd35e57361a657b4ae10ad61fbb5c4" Feb 26 09:03:55 crc kubenswrapper[4741]: I0226 09:03:55.845467 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:03:55 crc kubenswrapper[4741]: E0226 09:03:55.846699 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.160861 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534944-5zf48"] Feb 26 09:04:00 crc kubenswrapper[4741]: E0226 09:04:00.162577 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef18cd91-a3a6-4c2b-8bfa-88430d243605" containerName="extract-content" Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.162595 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef18cd91-a3a6-4c2b-8bfa-88430d243605" containerName="extract-content" Feb 26 09:04:00 crc 
kubenswrapper[4741]: E0226 09:04:00.162636 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef18cd91-a3a6-4c2b-8bfa-88430d243605" containerName="registry-server" Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.162644 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef18cd91-a3a6-4c2b-8bfa-88430d243605" containerName="registry-server" Feb 26 09:04:00 crc kubenswrapper[4741]: E0226 09:04:00.162682 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef18cd91-a3a6-4c2b-8bfa-88430d243605" containerName="extract-utilities" Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.162692 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef18cd91-a3a6-4c2b-8bfa-88430d243605" containerName="extract-utilities" Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.163004 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef18cd91-a3a6-4c2b-8bfa-88430d243605" containerName="registry-server" Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.164235 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534944-5zf48" Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.167715 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.167764 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.167948 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.172398 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534944-5zf48"] Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.284217 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpqlg\" (UniqueName: \"kubernetes.io/projected/8e302707-410f-4651-83e2-8d9f7f80514b-kube-api-access-rpqlg\") pod \"auto-csr-approver-29534944-5zf48\" (UID: \"8e302707-410f-4651-83e2-8d9f7f80514b\") " pod="openshift-infra/auto-csr-approver-29534944-5zf48" Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.389214 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpqlg\" (UniqueName: \"kubernetes.io/projected/8e302707-410f-4651-83e2-8d9f7f80514b-kube-api-access-rpqlg\") pod \"auto-csr-approver-29534944-5zf48\" (UID: \"8e302707-410f-4651-83e2-8d9f7f80514b\") " pod="openshift-infra/auto-csr-approver-29534944-5zf48" Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.414369 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpqlg\" (UniqueName: \"kubernetes.io/projected/8e302707-410f-4651-83e2-8d9f7f80514b-kube-api-access-rpqlg\") pod \"auto-csr-approver-29534944-5zf48\" (UID: \"8e302707-410f-4651-83e2-8d9f7f80514b\") " 
pod="openshift-infra/auto-csr-approver-29534944-5zf48" Feb 26 09:04:00 crc kubenswrapper[4741]: I0226 09:04:00.495246 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534944-5zf48" Feb 26 09:04:01 crc kubenswrapper[4741]: I0226 09:04:01.070934 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534944-5zf48"] Feb 26 09:04:01 crc kubenswrapper[4741]: I0226 09:04:01.083617 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 09:04:01 crc kubenswrapper[4741]: I0226 09:04:01.938477 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534944-5zf48" event={"ID":"8e302707-410f-4651-83e2-8d9f7f80514b","Type":"ContainerStarted","Data":"209f10c9412ce109e880dec764b377548687ffd90ca45bb4c0448369c30e722d"} Feb 26 09:04:02 crc kubenswrapper[4741]: I0226 09:04:02.954620 4741 generic.go:334] "Generic (PLEG): container finished" podID="8e302707-410f-4651-83e2-8d9f7f80514b" containerID="6b8a34adcc9c525451b57cfe7171b2177acc35922402b8b6c1ee65e70f45e883" exitCode=0 Feb 26 09:04:02 crc kubenswrapper[4741]: I0226 09:04:02.956606 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534944-5zf48" event={"ID":"8e302707-410f-4651-83e2-8d9f7f80514b","Type":"ContainerDied","Data":"6b8a34adcc9c525451b57cfe7171b2177acc35922402b8b6c1ee65e70f45e883"} Feb 26 09:04:04 crc kubenswrapper[4741]: I0226 09:04:04.485337 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534944-5zf48" Feb 26 09:04:04 crc kubenswrapper[4741]: I0226 09:04:04.536306 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpqlg\" (UniqueName: \"kubernetes.io/projected/8e302707-410f-4651-83e2-8d9f7f80514b-kube-api-access-rpqlg\") pod \"8e302707-410f-4651-83e2-8d9f7f80514b\" (UID: \"8e302707-410f-4651-83e2-8d9f7f80514b\") " Feb 26 09:04:04 crc kubenswrapper[4741]: I0226 09:04:04.556173 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e302707-410f-4651-83e2-8d9f7f80514b-kube-api-access-rpqlg" (OuterVolumeSpecName: "kube-api-access-rpqlg") pod "8e302707-410f-4651-83e2-8d9f7f80514b" (UID: "8e302707-410f-4651-83e2-8d9f7f80514b"). InnerVolumeSpecName "kube-api-access-rpqlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:04:04 crc kubenswrapper[4741]: I0226 09:04:04.641185 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpqlg\" (UniqueName: \"kubernetes.io/projected/8e302707-410f-4651-83e2-8d9f7f80514b-kube-api-access-rpqlg\") on node \"crc\" DevicePath \"\"" Feb 26 09:04:04 crc kubenswrapper[4741]: I0226 09:04:04.984423 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534944-5zf48" event={"ID":"8e302707-410f-4651-83e2-8d9f7f80514b","Type":"ContainerDied","Data":"209f10c9412ce109e880dec764b377548687ffd90ca45bb4c0448369c30e722d"} Feb 26 09:04:04 crc kubenswrapper[4741]: I0226 09:04:04.984497 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="209f10c9412ce109e880dec764b377548687ffd90ca45bb4c0448369c30e722d" Feb 26 09:04:04 crc kubenswrapper[4741]: I0226 09:04:04.984512 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534944-5zf48" Feb 26 09:04:05 crc kubenswrapper[4741]: I0226 09:04:05.595805 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534938-mkf4d"] Feb 26 09:04:05 crc kubenswrapper[4741]: I0226 09:04:05.611477 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534938-mkf4d"] Feb 26 09:04:05 crc kubenswrapper[4741]: I0226 09:04:05.814478 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="990a6803-9d8a-4544-818f-0979ab2292bf" path="/var/lib/kubelet/pods/990a6803-9d8a-4544-818f-0979ab2292bf/volumes" Feb 26 09:04:06 crc kubenswrapper[4741]: I0226 09:04:06.787719 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:04:06 crc kubenswrapper[4741]: E0226 09:04:06.788682 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:04:21 crc kubenswrapper[4741]: I0226 09:04:21.788708 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:04:21 crc kubenswrapper[4741]: E0226 09:04:21.790010 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" 
podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:04:36 crc kubenswrapper[4741]: I0226 09:04:36.788049 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:04:36 crc kubenswrapper[4741]: E0226 09:04:36.789648 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:04:47 crc kubenswrapper[4741]: I0226 09:04:47.787583 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:04:47 crc kubenswrapper[4741]: E0226 09:04:47.788798 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:04:51 crc kubenswrapper[4741]: I0226 09:04:51.667803 4741 generic.go:334] "Generic (PLEG): container finished" podID="ebe89e06-bf26-474e-8caf-f29a10b0fb24" containerID="7b80271e8dfb2932dfeb477765d1766ad94775b77b37b6c90b5090addaf8fab7" exitCode=0 Feb 26 09:04:51 crc kubenswrapper[4741]: I0226 09:04:51.667936 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" event={"ID":"ebe89e06-bf26-474e-8caf-f29a10b0fb24","Type":"ContainerDied","Data":"7b80271e8dfb2932dfeb477765d1766ad94775b77b37b6c90b5090addaf8fab7"} Feb 26 09:04:53 crc 
kubenswrapper[4741]: I0226 09:04:53.259654 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.375540 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-0\") pod \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.376080 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-1\") pod \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.376340 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-extra-config-0\") pod \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.376514 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-3\") pod \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.376691 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-2\") pod 
\"ebe89e06-bf26-474e-8caf-f29a10b0fb24\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.376930 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-combined-ca-bundle\") pod \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.377279 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-inventory\") pod \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.377673 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp967\" (UniqueName: \"kubernetes.io/projected/ebe89e06-bf26-474e-8caf-f29a10b0fb24-kube-api-access-xp967\") pod \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.378337 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-migration-ssh-key-0\") pod \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.378699 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-migration-ssh-key-1\") pod \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.378919 4741 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-ssh-key-openstack-edpm-ipam\") pod \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\" (UID: \"ebe89e06-bf26-474e-8caf-f29a10b0fb24\") " Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.383392 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ebe89e06-bf26-474e-8caf-f29a10b0fb24" (UID: "ebe89e06-bf26-474e-8caf-f29a10b0fb24"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.387616 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebe89e06-bf26-474e-8caf-f29a10b0fb24-kube-api-access-xp967" (OuterVolumeSpecName: "kube-api-access-xp967") pod "ebe89e06-bf26-474e-8caf-f29a10b0fb24" (UID: "ebe89e06-bf26-474e-8caf-f29a10b0fb24"). InnerVolumeSpecName "kube-api-access-xp967". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.422904 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "ebe89e06-bf26-474e-8caf-f29a10b0fb24" (UID: "ebe89e06-bf26-474e-8caf-f29a10b0fb24"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.426369 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-inventory" (OuterVolumeSpecName: "inventory") pod "ebe89e06-bf26-474e-8caf-f29a10b0fb24" (UID: "ebe89e06-bf26-474e-8caf-f29a10b0fb24"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.430364 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "ebe89e06-bf26-474e-8caf-f29a10b0fb24" (UID: "ebe89e06-bf26-474e-8caf-f29a10b0fb24"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.432246 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "ebe89e06-bf26-474e-8caf-f29a10b0fb24" (UID: "ebe89e06-bf26-474e-8caf-f29a10b0fb24"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.434839 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ebe89e06-bf26-474e-8caf-f29a10b0fb24" (UID: "ebe89e06-bf26-474e-8caf-f29a10b0fb24"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.441920 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "ebe89e06-bf26-474e-8caf-f29a10b0fb24" (UID: "ebe89e06-bf26-474e-8caf-f29a10b0fb24"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.448907 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "ebe89e06-bf26-474e-8caf-f29a10b0fb24" (UID: "ebe89e06-bf26-474e-8caf-f29a10b0fb24"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.458315 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "ebe89e06-bf26-474e-8caf-f29a10b0fb24" (UID: "ebe89e06-bf26-474e-8caf-f29a10b0fb24"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.461548 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "ebe89e06-bf26-474e-8caf-f29a10b0fb24" (UID: "ebe89e06-bf26-474e-8caf-f29a10b0fb24"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.483238 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.483278 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp967\" (UniqueName: \"kubernetes.io/projected/ebe89e06-bf26-474e-8caf-f29a10b0fb24-kube-api-access-xp967\") on node \"crc\" DevicePath \"\"" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.483309 4741 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.483320 4741 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.483330 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.483340 4741 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.483349 4741 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-1\") 
on node \"crc\" DevicePath \"\"" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.483358 4741 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.483366 4741 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.483395 4741 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.483404 4741 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe89e06-bf26-474e-8caf-f29a10b0fb24-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.701888 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" event={"ID":"ebe89e06-bf26-474e-8caf-f29a10b0fb24","Type":"ContainerDied","Data":"a35483e39275b6d9ce7667561957d6ba106df1bf82f65ca12c79b2f5426795bf"} Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.701962 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a35483e39275b6d9ce7667561957d6ba106df1bf82f65ca12c79b2f5426795bf" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.702099 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ch4rj" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.826956 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7"] Feb 26 09:04:53 crc kubenswrapper[4741]: E0226 09:04:53.827674 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebe89e06-bf26-474e-8caf-f29a10b0fb24" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.827693 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebe89e06-bf26-474e-8caf-f29a10b0fb24" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 09:04:53 crc kubenswrapper[4741]: E0226 09:04:53.827713 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e302707-410f-4651-83e2-8d9f7f80514b" containerName="oc" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.827719 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e302707-410f-4651-83e2-8d9f7f80514b" containerName="oc" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.827960 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e302707-410f-4651-83e2-8d9f7f80514b" containerName="oc" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.827985 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebe89e06-bf26-474e-8caf-f29a10b0fb24" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.828877 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.831491 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.831716 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.832898 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.836389 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.840316 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 09:04:53 crc kubenswrapper[4741]: I0226 09:04:53.858525 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7"] Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.003138 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.003218 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxvx2\" (UniqueName: \"kubernetes.io/projected/a3cde25c-5220-45d5-8f47-db09f2db34e8-kube-api-access-nxvx2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.003345 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.003381 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.003744 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.003944 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.004298 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.108165 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.108243 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxvx2\" (UniqueName: \"kubernetes.io/projected/a3cde25c-5220-45d5-8f47-db09f2db34e8-kube-api-access-nxvx2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.108306 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.108337 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.108401 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.108442 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.108506 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.112553 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.113311 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.114186 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.115471 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.116208 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.117166 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.131020 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxvx2\" (UniqueName: \"kubernetes.io/projected/a3cde25c-5220-45d5-8f47-db09f2db34e8-kube-api-access-nxvx2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.152445 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.561185 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7"] Feb 26 09:04:54 crc kubenswrapper[4741]: I0226 09:04:54.716619 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" event={"ID":"a3cde25c-5220-45d5-8f47-db09f2db34e8","Type":"ContainerStarted","Data":"754a23e32585bd69dd79ffe13bd12d2b65185f1689c78cbc2162831ead838016"} Feb 26 09:04:56 crc kubenswrapper[4741]: I0226 09:04:56.750545 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" event={"ID":"a3cde25c-5220-45d5-8f47-db09f2db34e8","Type":"ContainerStarted","Data":"ffabe5378f49bf7f6bfa2b6f7985919b51bf826124171ac6b5f72ece677ac936"} Feb 26 09:04:56 crc kubenswrapper[4741]: I0226 09:04:56.787614 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" podStartSLOduration=2.987245625 podStartE2EDuration="3.787586917s" podCreationTimestamp="2026-02-26 09:04:53 +0000 UTC" firstStartedPulling="2026-02-26 09:04:54.587353573 +0000 UTC m=+3129.583290960" lastFinishedPulling="2026-02-26 09:04:55.387694825 +0000 UTC m=+3130.383632252" observedRunningTime="2026-02-26 09:04:56.769549106 +0000 UTC m=+3131.765486533" watchObservedRunningTime="2026-02-26 09:04:56.787586917 +0000 UTC m=+3131.783524304" Feb 26 09:04:59 crc kubenswrapper[4741]: I0226 09:04:59.788330 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:04:59 crc kubenswrapper[4741]: E0226 09:04:59.791198 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:05:02 crc kubenswrapper[4741]: I0226 09:05:02.346356 4741 scope.go:117] "RemoveContainer" containerID="65d32d25ea966cfe09e86534d1f6244d0ba5f9d8906bde127ec96f8513bd8330" Feb 26 09:05:13 crc kubenswrapper[4741]: I0226 09:05:13.787858 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:05:13 crc kubenswrapper[4741]: E0226 09:05:13.789571 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:05:25 crc kubenswrapper[4741]: I0226 09:05:25.798084 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:05:25 crc kubenswrapper[4741]: E0226 09:05:25.799616 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:05:37 crc kubenswrapper[4741]: I0226 09:05:37.788712 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 
09:05:37 crc kubenswrapper[4741]: E0226 09:05:37.790733 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:05:48 crc kubenswrapper[4741]: I0226 09:05:48.787341 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:05:48 crc kubenswrapper[4741]: E0226 09:05:48.788819 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:06:00 crc kubenswrapper[4741]: I0226 09:06:00.161565 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534946-swrw8"] Feb 26 09:06:00 crc kubenswrapper[4741]: I0226 09:06:00.164803 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534946-swrw8" Feb 26 09:06:00 crc kubenswrapper[4741]: I0226 09:06:00.168673 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:06:00 crc kubenswrapper[4741]: I0226 09:06:00.170648 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:06:00 crc kubenswrapper[4741]: I0226 09:06:00.171202 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:06:00 crc kubenswrapper[4741]: I0226 09:06:00.173217 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534946-swrw8"] Feb 26 09:06:00 crc kubenswrapper[4741]: I0226 09:06:00.246394 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6blb\" (UniqueName: \"kubernetes.io/projected/70bc18d9-2505-4f9a-8c56-b8e2e45aa841-kube-api-access-t6blb\") pod \"auto-csr-approver-29534946-swrw8\" (UID: \"70bc18d9-2505-4f9a-8c56-b8e2e45aa841\") " pod="openshift-infra/auto-csr-approver-29534946-swrw8" Feb 26 09:06:00 crc kubenswrapper[4741]: I0226 09:06:00.349949 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6blb\" (UniqueName: \"kubernetes.io/projected/70bc18d9-2505-4f9a-8c56-b8e2e45aa841-kube-api-access-t6blb\") pod \"auto-csr-approver-29534946-swrw8\" (UID: \"70bc18d9-2505-4f9a-8c56-b8e2e45aa841\") " pod="openshift-infra/auto-csr-approver-29534946-swrw8" Feb 26 09:06:00 crc kubenswrapper[4741]: I0226 09:06:00.376663 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6blb\" (UniqueName: \"kubernetes.io/projected/70bc18d9-2505-4f9a-8c56-b8e2e45aa841-kube-api-access-t6blb\") pod \"auto-csr-approver-29534946-swrw8\" (UID: \"70bc18d9-2505-4f9a-8c56-b8e2e45aa841\") " 
pod="openshift-infra/auto-csr-approver-29534946-swrw8" Feb 26 09:06:00 crc kubenswrapper[4741]: I0226 09:06:00.499245 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534946-swrw8" Feb 26 09:06:01 crc kubenswrapper[4741]: I0226 09:06:01.072664 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534946-swrw8"] Feb 26 09:06:01 crc kubenswrapper[4741]: I0226 09:06:01.720873 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534946-swrw8" event={"ID":"70bc18d9-2505-4f9a-8c56-b8e2e45aa841","Type":"ContainerStarted","Data":"4456bda8a29993438306a6a840e20fcaf8debd44183a913dcbf23644f748e39b"} Feb 26 09:06:02 crc kubenswrapper[4741]: I0226 09:06:02.740314 4741 generic.go:334] "Generic (PLEG): container finished" podID="70bc18d9-2505-4f9a-8c56-b8e2e45aa841" containerID="107f6bd5c9c71c37282720c18d4151a4d4271e3eccaac5ced7d7b07d292a0edd" exitCode=0 Feb 26 09:06:02 crc kubenswrapper[4741]: I0226 09:06:02.740436 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534946-swrw8" event={"ID":"70bc18d9-2505-4f9a-8c56-b8e2e45aa841","Type":"ContainerDied","Data":"107f6bd5c9c71c37282720c18d4151a4d4271e3eccaac5ced7d7b07d292a0edd"} Feb 26 09:06:02 crc kubenswrapper[4741]: I0226 09:06:02.787886 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:06:02 crc kubenswrapper[4741]: E0226 09:06:02.788548 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" 
Feb 26 09:06:04 crc kubenswrapper[4741]: I0226 09:06:04.157535 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534946-swrw8" Feb 26 09:06:04 crc kubenswrapper[4741]: I0226 09:06:04.313961 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6blb\" (UniqueName: \"kubernetes.io/projected/70bc18d9-2505-4f9a-8c56-b8e2e45aa841-kube-api-access-t6blb\") pod \"70bc18d9-2505-4f9a-8c56-b8e2e45aa841\" (UID: \"70bc18d9-2505-4f9a-8c56-b8e2e45aa841\") " Feb 26 09:06:04 crc kubenswrapper[4741]: I0226 09:06:04.327423 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70bc18d9-2505-4f9a-8c56-b8e2e45aa841-kube-api-access-t6blb" (OuterVolumeSpecName: "kube-api-access-t6blb") pod "70bc18d9-2505-4f9a-8c56-b8e2e45aa841" (UID: "70bc18d9-2505-4f9a-8c56-b8e2e45aa841"). InnerVolumeSpecName "kube-api-access-t6blb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:06:04 crc kubenswrapper[4741]: I0226 09:06:04.418918 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6blb\" (UniqueName: \"kubernetes.io/projected/70bc18d9-2505-4f9a-8c56-b8e2e45aa841-kube-api-access-t6blb\") on node \"crc\" DevicePath \"\"" Feb 26 09:06:04 crc kubenswrapper[4741]: I0226 09:06:04.764499 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534946-swrw8" event={"ID":"70bc18d9-2505-4f9a-8c56-b8e2e45aa841","Type":"ContainerDied","Data":"4456bda8a29993438306a6a840e20fcaf8debd44183a913dcbf23644f748e39b"} Feb 26 09:06:04 crc kubenswrapper[4741]: I0226 09:06:04.764553 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4456bda8a29993438306a6a840e20fcaf8debd44183a913dcbf23644f748e39b" Feb 26 09:06:04 crc kubenswrapper[4741]: I0226 09:06:04.764577 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534946-swrw8" Feb 26 09:06:05 crc kubenswrapper[4741]: I0226 09:06:05.265593 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534940-t9xj2"] Feb 26 09:06:05 crc kubenswrapper[4741]: I0226 09:06:05.277008 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534940-t9xj2"] Feb 26 09:06:05 crc kubenswrapper[4741]: I0226 09:06:05.816378 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea04f7e-f160-4716-8a92-7ef5dc43a822" path="/var/lib/kubelet/pods/2ea04f7e-f160-4716-8a92-7ef5dc43a822/volumes" Feb 26 09:06:13 crc kubenswrapper[4741]: I0226 09:06:13.790689 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:06:13 crc kubenswrapper[4741]: E0226 09:06:13.791904 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:06:24 crc kubenswrapper[4741]: I0226 09:06:24.787534 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:06:24 crc kubenswrapper[4741]: E0226 09:06:24.788739 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" 
podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:06:35 crc kubenswrapper[4741]: I0226 09:06:35.796683 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:06:35 crc kubenswrapper[4741]: E0226 09:06:35.797784 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:06:47 crc kubenswrapper[4741]: I0226 09:06:47.788604 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:06:47 crc kubenswrapper[4741]: E0226 09:06:47.789975 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:06:58 crc kubenswrapper[4741]: I0226 09:06:58.787198 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:06:58 crc kubenswrapper[4741]: E0226 09:06:58.788445 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:07:02 crc kubenswrapper[4741]: I0226 09:07:02.533947 4741 scope.go:117] "RemoveContainer" containerID="3ccc70103c9e4972bfaecb958c7f8e1ff9c18388652e9cd6836e78753fe12064" Feb 26 09:07:10 crc kubenswrapper[4741]: I0226 09:07:10.787936 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:07:10 crc kubenswrapper[4741]: E0226 09:07:10.789101 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:07:21 crc kubenswrapper[4741]: I0226 09:07:21.788786 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:07:21 crc kubenswrapper[4741]: E0226 09:07:21.789946 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:07:22 crc kubenswrapper[4741]: I0226 09:07:22.796008 4741 generic.go:334] "Generic (PLEG): container finished" podID="a3cde25c-5220-45d5-8f47-db09f2db34e8" containerID="ffabe5378f49bf7f6bfa2b6f7985919b51bf826124171ac6b5f72ece677ac936" exitCode=0 Feb 26 09:07:22 crc kubenswrapper[4741]: I0226 09:07:22.796081 4741 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" event={"ID":"a3cde25c-5220-45d5-8f47-db09f2db34e8","Type":"ContainerDied","Data":"ffabe5378f49bf7f6bfa2b6f7985919b51bf826124171ac6b5f72ece677ac936"} Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.459710 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.554289 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-0\") pod \"a3cde25c-5220-45d5-8f47-db09f2db34e8\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.554603 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-telemetry-combined-ca-bundle\") pod \"a3cde25c-5220-45d5-8f47-db09f2db34e8\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.554772 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-inventory\") pod \"a3cde25c-5220-45d5-8f47-db09f2db34e8\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.554947 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxvx2\" (UniqueName: \"kubernetes.io/projected/a3cde25c-5220-45d5-8f47-db09f2db34e8-kube-api-access-nxvx2\") pod \"a3cde25c-5220-45d5-8f47-db09f2db34e8\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.555255 4741 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ssh-key-openstack-edpm-ipam\") pod \"a3cde25c-5220-45d5-8f47-db09f2db34e8\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.555380 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-1\") pod \"a3cde25c-5220-45d5-8f47-db09f2db34e8\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.555488 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-2\") pod \"a3cde25c-5220-45d5-8f47-db09f2db34e8\" (UID: \"a3cde25c-5220-45d5-8f47-db09f2db34e8\") " Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.563662 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3cde25c-5220-45d5-8f47-db09f2db34e8-kube-api-access-nxvx2" (OuterVolumeSpecName: "kube-api-access-nxvx2") pod "a3cde25c-5220-45d5-8f47-db09f2db34e8" (UID: "a3cde25c-5220-45d5-8f47-db09f2db34e8"). InnerVolumeSpecName "kube-api-access-nxvx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.567020 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a3cde25c-5220-45d5-8f47-db09f2db34e8" (UID: "a3cde25c-5220-45d5-8f47-db09f2db34e8"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.615004 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "a3cde25c-5220-45d5-8f47-db09f2db34e8" (UID: "a3cde25c-5220-45d5-8f47-db09f2db34e8"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.615648 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "a3cde25c-5220-45d5-8f47-db09f2db34e8" (UID: "a3cde25c-5220-45d5-8f47-db09f2db34e8"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.623374 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "a3cde25c-5220-45d5-8f47-db09f2db34e8" (UID: "a3cde25c-5220-45d5-8f47-db09f2db34e8"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.635025 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a3cde25c-5220-45d5-8f47-db09f2db34e8" (UID: "a3cde25c-5220-45d5-8f47-db09f2db34e8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.659770 4741 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.659812 4741 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.659826 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxvx2\" (UniqueName: \"kubernetes.io/projected/a3cde25c-5220-45d5-8f47-db09f2db34e8-kube-api-access-nxvx2\") on node \"crc\" DevicePath \"\"" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.659836 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.659845 4741 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.659856 4741 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.667314 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-inventory" (OuterVolumeSpecName: "inventory") pod "a3cde25c-5220-45d5-8f47-db09f2db34e8" (UID: "a3cde25c-5220-45d5-8f47-db09f2db34e8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.762078 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3cde25c-5220-45d5-8f47-db09f2db34e8-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.831735 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" event={"ID":"a3cde25c-5220-45d5-8f47-db09f2db34e8","Type":"ContainerDied","Data":"754a23e32585bd69dd79ffe13bd12d2b65185f1689c78cbc2162831ead838016"} Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.831804 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="754a23e32585bd69dd79ffe13bd12d2b65185f1689c78cbc2162831ead838016" Feb 26 09:07:24 crc kubenswrapper[4741]: I0226 09:07:24.831906 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7" Feb 26 09:07:24 crc kubenswrapper[4741]: E0226 09:07:24.993533 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3cde25c_5220_45d5_8f47_db09f2db34e8.slice\": RecentStats: unable to find data in memory cache]" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.024433 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv"] Feb 26 09:07:25 crc kubenswrapper[4741]: E0226 09:07:25.025210 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3cde25c-5220-45d5-8f47-db09f2db34e8" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.025231 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3cde25c-5220-45d5-8f47-db09f2db34e8" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 26 09:07:25 crc kubenswrapper[4741]: E0226 09:07:25.025260 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70bc18d9-2505-4f9a-8c56-b8e2e45aa841" containerName="oc" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.025267 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="70bc18d9-2505-4f9a-8c56-b8e2e45aa841" containerName="oc" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.025529 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3cde25c-5220-45d5-8f47-db09f2db34e8" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.025550 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="70bc18d9-2505-4f9a-8c56-b8e2e45aa841" containerName="oc" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.026571 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.036898 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.037446 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.037811 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.037948 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.038049 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.039710 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv"] Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.174528 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.174626 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.174661 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.174746 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.174770 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.174802 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p48fx\" (UniqueName: \"kubernetes.io/projected/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-kube-api-access-p48fx\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.174858 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.278267 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p48fx\" (UniqueName: \"kubernetes.io/projected/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-kube-api-access-p48fx\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.278416 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.278527 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-2\") 
pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.278612 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.278655 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.278784 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.278842 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ssh-key-openstack-edpm-ipam\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.283958 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.285295 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.285746 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.286702 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.288454 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.289660 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.306440 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p48fx\" (UniqueName: \"kubernetes.io/projected/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-kube-api-access-p48fx\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:25 crc kubenswrapper[4741]: I0226 09:07:25.353632 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:07:26 crc kubenswrapper[4741]: I0226 09:07:26.001327 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv"] Feb 26 09:07:26 crc kubenswrapper[4741]: I0226 09:07:26.897836 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" event={"ID":"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a","Type":"ContainerStarted","Data":"1e0f62e7de1ad2d84b3854ec80a0f9d0a2c0495233083ad512dda06f325c33e3"} Feb 26 09:07:26 crc kubenswrapper[4741]: I0226 09:07:26.898421 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" event={"ID":"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a","Type":"ContainerStarted","Data":"8838f08aed719cf93323fb2ef6e737db8c861672c6e56b9b217dfc92a6745728"} Feb 26 09:07:26 crc kubenswrapper[4741]: I0226 09:07:26.938773 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" podStartSLOduration=2.540947592 podStartE2EDuration="2.938749681s" podCreationTimestamp="2026-02-26 09:07:24 +0000 UTC" firstStartedPulling="2026-02-26 09:07:25.996803463 +0000 UTC m=+3280.992740860" lastFinishedPulling="2026-02-26 09:07:26.394605572 +0000 UTC m=+3281.390542949" observedRunningTime="2026-02-26 09:07:26.920790531 +0000 UTC m=+3281.916727918" watchObservedRunningTime="2026-02-26 09:07:26.938749681 +0000 UTC m=+3281.934687068" Feb 26 09:07:32 crc kubenswrapper[4741]: I0226 09:07:32.788639 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:07:32 crc kubenswrapper[4741]: E0226 09:07:32.790299 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:07:43 crc kubenswrapper[4741]: I0226 09:07:43.789613 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:07:43 crc kubenswrapper[4741]: E0226 09:07:43.790383 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:07:58 crc kubenswrapper[4741]: I0226 09:07:58.787715 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:07:58 crc kubenswrapper[4741]: E0226 09:07:58.788791 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:08:00 crc kubenswrapper[4741]: I0226 09:08:00.184289 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534948-5vx2k"] Feb 26 09:08:00 crc kubenswrapper[4741]: I0226 09:08:00.186981 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534948-5vx2k" Feb 26 09:08:00 crc kubenswrapper[4741]: I0226 09:08:00.194055 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:08:00 crc kubenswrapper[4741]: I0226 09:08:00.194814 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:08:00 crc kubenswrapper[4741]: I0226 09:08:00.196301 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:08:00 crc kubenswrapper[4741]: I0226 09:08:00.197493 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534948-5vx2k"] Feb 26 09:08:00 crc kubenswrapper[4741]: I0226 09:08:00.250837 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r29gr\" (UniqueName: \"kubernetes.io/projected/65e93514-0dd4-45c4-a1d7-4582cfc406d3-kube-api-access-r29gr\") pod \"auto-csr-approver-29534948-5vx2k\" (UID: \"65e93514-0dd4-45c4-a1d7-4582cfc406d3\") " pod="openshift-infra/auto-csr-approver-29534948-5vx2k" Feb 26 09:08:00 crc kubenswrapper[4741]: I0226 09:08:00.353469 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r29gr\" (UniqueName: \"kubernetes.io/projected/65e93514-0dd4-45c4-a1d7-4582cfc406d3-kube-api-access-r29gr\") pod \"auto-csr-approver-29534948-5vx2k\" (UID: \"65e93514-0dd4-45c4-a1d7-4582cfc406d3\") " pod="openshift-infra/auto-csr-approver-29534948-5vx2k" Feb 26 09:08:00 crc kubenswrapper[4741]: I0226 09:08:00.379839 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r29gr\" (UniqueName: \"kubernetes.io/projected/65e93514-0dd4-45c4-a1d7-4582cfc406d3-kube-api-access-r29gr\") pod \"auto-csr-approver-29534948-5vx2k\" (UID: \"65e93514-0dd4-45c4-a1d7-4582cfc406d3\") " 
pod="openshift-infra/auto-csr-approver-29534948-5vx2k" Feb 26 09:08:00 crc kubenswrapper[4741]: I0226 09:08:00.513363 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534948-5vx2k" Feb 26 09:08:01 crc kubenswrapper[4741]: I0226 09:08:01.061421 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534948-5vx2k"] Feb 26 09:08:01 crc kubenswrapper[4741]: I0226 09:08:01.411687 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534948-5vx2k" event={"ID":"65e93514-0dd4-45c4-a1d7-4582cfc406d3","Type":"ContainerStarted","Data":"ae6d3e151c35594e049a45be1140296d88ef82fa8ae5da2deb154d148bdb6bbe"} Feb 26 09:08:02 crc kubenswrapper[4741]: I0226 09:08:02.439828 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534948-5vx2k" event={"ID":"65e93514-0dd4-45c4-a1d7-4582cfc406d3","Type":"ContainerStarted","Data":"b509dec673e4de38c92a061fd775caf51367a4867485abf0ea2e053138e08393"} Feb 26 09:08:02 crc kubenswrapper[4741]: I0226 09:08:02.475567 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534948-5vx2k" podStartSLOduration=1.547573618 podStartE2EDuration="2.475535989s" podCreationTimestamp="2026-02-26 09:08:00 +0000 UTC" firstStartedPulling="2026-02-26 09:08:01.065225982 +0000 UTC m=+3316.061163409" lastFinishedPulling="2026-02-26 09:08:01.993188393 +0000 UTC m=+3316.989125780" observedRunningTime="2026-02-26 09:08:02.462485749 +0000 UTC m=+3317.458423166" watchObservedRunningTime="2026-02-26 09:08:02.475535989 +0000 UTC m=+3317.471473376" Feb 26 09:08:03 crc kubenswrapper[4741]: I0226 09:08:03.457413 4741 generic.go:334] "Generic (PLEG): container finished" podID="65e93514-0dd4-45c4-a1d7-4582cfc406d3" containerID="b509dec673e4de38c92a061fd775caf51367a4867485abf0ea2e053138e08393" exitCode=0 Feb 26 09:08:03 crc 
kubenswrapper[4741]: I0226 09:08:03.457476 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534948-5vx2k" event={"ID":"65e93514-0dd4-45c4-a1d7-4582cfc406d3","Type":"ContainerDied","Data":"b509dec673e4de38c92a061fd775caf51367a4867485abf0ea2e053138e08393"} Feb 26 09:08:04 crc kubenswrapper[4741]: I0226 09:08:04.922493 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534948-5vx2k" Feb 26 09:08:05 crc kubenswrapper[4741]: I0226 09:08:05.020260 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r29gr\" (UniqueName: \"kubernetes.io/projected/65e93514-0dd4-45c4-a1d7-4582cfc406d3-kube-api-access-r29gr\") pod \"65e93514-0dd4-45c4-a1d7-4582cfc406d3\" (UID: \"65e93514-0dd4-45c4-a1d7-4582cfc406d3\") " Feb 26 09:08:05 crc kubenswrapper[4741]: I0226 09:08:05.028846 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e93514-0dd4-45c4-a1d7-4582cfc406d3-kube-api-access-r29gr" (OuterVolumeSpecName: "kube-api-access-r29gr") pod "65e93514-0dd4-45c4-a1d7-4582cfc406d3" (UID: "65e93514-0dd4-45c4-a1d7-4582cfc406d3"). InnerVolumeSpecName "kube-api-access-r29gr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:08:05 crc kubenswrapper[4741]: I0226 09:08:05.125148 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r29gr\" (UniqueName: \"kubernetes.io/projected/65e93514-0dd4-45c4-a1d7-4582cfc406d3-kube-api-access-r29gr\") on node \"crc\" DevicePath \"\"" Feb 26 09:08:05 crc kubenswrapper[4741]: I0226 09:08:05.482766 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534948-5vx2k" event={"ID":"65e93514-0dd4-45c4-a1d7-4582cfc406d3","Type":"ContainerDied","Data":"ae6d3e151c35594e049a45be1140296d88ef82fa8ae5da2deb154d148bdb6bbe"} Feb 26 09:08:05 crc kubenswrapper[4741]: I0226 09:08:05.483344 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae6d3e151c35594e049a45be1140296d88ef82fa8ae5da2deb154d148bdb6bbe" Feb 26 09:08:05 crc kubenswrapper[4741]: I0226 09:08:05.482952 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534948-5vx2k" Feb 26 09:08:05 crc kubenswrapper[4741]: I0226 09:08:05.559300 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534942-ht4jp"] Feb 26 09:08:05 crc kubenswrapper[4741]: I0226 09:08:05.571866 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534942-ht4jp"] Feb 26 09:08:05 crc kubenswrapper[4741]: I0226 09:08:05.851923 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f5322e6-e663-4e7f-aa5c-a54647d23a4f" path="/var/lib/kubelet/pods/7f5322e6-e663-4e7f-aa5c-a54647d23a4f/volumes" Feb 26 09:08:11 crc kubenswrapper[4741]: I0226 09:08:11.788098 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:08:11 crc kubenswrapper[4741]: E0226 09:08:11.789041 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:08:25 crc kubenswrapper[4741]: I0226 09:08:25.838191 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:08:25 crc kubenswrapper[4741]: E0226 09:08:25.840819 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:08:40 crc kubenswrapper[4741]: I0226 09:08:40.788560 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:08:40 crc kubenswrapper[4741]: E0226 09:08:40.789519 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:08:54 crc kubenswrapper[4741]: I0226 09:08:54.787587 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:08:54 crc kubenswrapper[4741]: E0226 09:08:54.789221 4741 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:09:02 crc kubenswrapper[4741]: I0226 09:09:02.668313 4741 scope.go:117] "RemoveContainer" containerID="0f1055a955b9432d6597c6b2e5c255ed3e18957b440f45b52f5030187ebe7433" Feb 26 09:09:05 crc kubenswrapper[4741]: I0226 09:09:05.813821 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:09:06 crc kubenswrapper[4741]: I0226 09:09:06.325471 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"e5fa0e51372b3ec95f08ed1d9dd2b0c86b06f97041ffb79317f46eb7b4873e0f"} Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.606252 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mtpsh"] Feb 26 09:09:11 crc kubenswrapper[4741]: E0226 09:09:11.608001 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e93514-0dd4-45c4-a1d7-4582cfc406d3" containerName="oc" Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.608025 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e93514-0dd4-45c4-a1d7-4582cfc406d3" containerName="oc" Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.608348 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="65e93514-0dd4-45c4-a1d7-4582cfc406d3" containerName="oc" Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.610950 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.628434 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mtpsh"] Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.759824 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-utilities\") pod \"community-operators-mtpsh\" (UID: \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\") " pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.759912 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6stw\" (UniqueName: \"kubernetes.io/projected/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-kube-api-access-h6stw\") pod \"community-operators-mtpsh\" (UID: \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\") " pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.759969 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-catalog-content\") pod \"community-operators-mtpsh\" (UID: \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\") " pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.863308 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-utilities\") pod \"community-operators-mtpsh\" (UID: \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\") " pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.863417 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-h6stw\" (UniqueName: \"kubernetes.io/projected/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-kube-api-access-h6stw\") pod \"community-operators-mtpsh\" (UID: \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\") " pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.863520 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-catalog-content\") pod \"community-operators-mtpsh\" (UID: \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\") " pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.864156 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-catalog-content\") pod \"community-operators-mtpsh\" (UID: \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\") " pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.865052 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-utilities\") pod \"community-operators-mtpsh\" (UID: \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\") " pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.889820 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6stw\" (UniqueName: \"kubernetes.io/projected/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-kube-api-access-h6stw\") pod \"community-operators-mtpsh\" (UID: \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\") " pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:11 crc kubenswrapper[4741]: I0226 09:09:11.952352 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:12 crc kubenswrapper[4741]: I0226 09:09:12.561677 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mtpsh"] Feb 26 09:09:13 crc kubenswrapper[4741]: I0226 09:09:13.422635 4741 generic.go:334] "Generic (PLEG): container finished" podID="03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" containerID="1004bc75b7d0265d00569ec8b6480308e2c80c6633793c4322faf0542305cba0" exitCode=0 Feb 26 09:09:13 crc kubenswrapper[4741]: I0226 09:09:13.422768 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mtpsh" event={"ID":"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b","Type":"ContainerDied","Data":"1004bc75b7d0265d00569ec8b6480308e2c80c6633793c4322faf0542305cba0"} Feb 26 09:09:13 crc kubenswrapper[4741]: I0226 09:09:13.423154 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mtpsh" event={"ID":"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b","Type":"ContainerStarted","Data":"55ef043af3c8b9809e47ec5f8e1af0bf225a35372feec13c71c84e1f65d90177"} Feb 26 09:09:13 crc kubenswrapper[4741]: I0226 09:09:13.428297 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 09:09:15 crc kubenswrapper[4741]: I0226 09:09:15.454649 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mtpsh" event={"ID":"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b","Type":"ContainerStarted","Data":"111e7d9a45f63a1535fcc9704c336876394722f98fd5036bf76ed10804ace043"} Feb 26 09:09:20 crc kubenswrapper[4741]: I0226 09:09:20.535161 4741 generic.go:334] "Generic (PLEG): container finished" podID="03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" containerID="111e7d9a45f63a1535fcc9704c336876394722f98fd5036bf76ed10804ace043" exitCode=0 Feb 26 09:09:20 crc kubenswrapper[4741]: I0226 09:09:20.535710 4741 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-mtpsh" event={"ID":"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b","Type":"ContainerDied","Data":"111e7d9a45f63a1535fcc9704c336876394722f98fd5036bf76ed10804ace043"} Feb 26 09:09:22 crc kubenswrapper[4741]: I0226 09:09:22.568165 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mtpsh" event={"ID":"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b","Type":"ContainerStarted","Data":"3f4d20f79a6b9283b3fe57a7c265b97a5431145b37bbc86b0c81c8e65ab925c1"} Feb 26 09:09:22 crc kubenswrapper[4741]: I0226 09:09:22.570630 4741 generic.go:334] "Generic (PLEG): container finished" podID="7b5e677e-1d6b-4c7f-925e-ac5f65ced91a" containerID="1e0f62e7de1ad2d84b3854ec80a0f9d0a2c0495233083ad512dda06f325c33e3" exitCode=0 Feb 26 09:09:22 crc kubenswrapper[4741]: I0226 09:09:22.570709 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" event={"ID":"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a","Type":"ContainerDied","Data":"1e0f62e7de1ad2d84b3854ec80a0f9d0a2c0495233083ad512dda06f325c33e3"} Feb 26 09:09:22 crc kubenswrapper[4741]: I0226 09:09:22.601237 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mtpsh" podStartSLOduration=3.723363849 podStartE2EDuration="11.601213455s" podCreationTimestamp="2026-02-26 09:09:11 +0000 UTC" firstStartedPulling="2026-02-26 09:09:13.427006056 +0000 UTC m=+3388.422943463" lastFinishedPulling="2026-02-26 09:09:21.304855682 +0000 UTC m=+3396.300793069" observedRunningTime="2026-02-26 09:09:22.589234645 +0000 UTC m=+3397.585172052" watchObservedRunningTime="2026-02-26 09:09:22.601213455 +0000 UTC m=+3397.597150842" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.193135 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.274060 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-0\") pod \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.274639 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-telemetry-power-monitoring-combined-ca-bundle\") pod \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.274866 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-2\") pod \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.274924 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-inventory\") pod \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.275003 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p48fx\" (UniqueName: \"kubernetes.io/projected/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-kube-api-access-p48fx\") pod \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " Feb 26 
09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.275026 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-1\") pod \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.275153 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ssh-key-openstack-edpm-ipam\") pod \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\" (UID: \"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a\") " Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.285445 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-kube-api-access-p48fx" (OuterVolumeSpecName: "kube-api-access-p48fx") pod "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a" (UID: "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a"). InnerVolumeSpecName "kube-api-access-p48fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.288326 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a" (UID: "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.318855 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a" (UID: "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.324458 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a" (UID: "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.333077 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a" (UID: "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.333701 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-inventory" (OuterVolumeSpecName: "inventory") pod "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a" (UID: "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.337192 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a" (UID: "7b5e677e-1d6b-4c7f-925e-ac5f65ced91a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.381839 4741 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.381890 4741 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.381905 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.381918 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p48fx\" (UniqueName: \"kubernetes.io/projected/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-kube-api-access-p48fx\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.381933 4741 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:24 
crc kubenswrapper[4741]: I0226 09:09:24.381946 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.381960 4741 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b5e677e-1d6b-4c7f-925e-ac5f65ced91a-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.601556 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" event={"ID":"7b5e677e-1d6b-4c7f-925e-ac5f65ced91a","Type":"ContainerDied","Data":"8838f08aed719cf93323fb2ef6e737db8c861672c6e56b9b217dfc92a6745728"} Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.601613 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8838f08aed719cf93323fb2ef6e737db8c861672c6e56b9b217dfc92a6745728" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.601692 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.872939 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx"] Feb 26 09:09:24 crc kubenswrapper[4741]: E0226 09:09:24.873587 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5e677e-1d6b-4c7f-925e-ac5f65ced91a" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.873614 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5e677e-1d6b-4c7f-925e-ac5f65ced91a" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.873917 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b5e677e-1d6b-4c7f-925e-ac5f65ced91a" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.874997 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.876857 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-96hc2" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.877702 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.877853 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.878024 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.880959 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 09:09:24 crc kubenswrapper[4741]: I0226 09:09:24.889207 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx"] Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.012557 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.013351 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p52qg\" (UniqueName: \"kubernetes.io/projected/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-kube-api-access-p52qg\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: 
\"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.013534 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.013817 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.013864 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.116610 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p52qg\" (UniqueName: \"kubernetes.io/projected/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-kube-api-access-p52qg\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 
09:09:25.116728 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.116859 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.116896 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.117008 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.124494 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.124659 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.125595 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.143505 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.144235 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p52qg\" (UniqueName: \"kubernetes.io/projected/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-kube-api-access-p52qg\") pod \"logging-edpm-deployment-openstack-edpm-ipam-njkxx\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.195033 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:25 crc kubenswrapper[4741]: I0226 09:09:25.822652 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx"] Feb 26 09:09:25 crc kubenswrapper[4741]: W0226 09:09:25.825040 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e4fadac_0554_4e65_a18c_b96b1bf9cb1a.slice/crio-6cf4350e691182314ac1a6a1df8cda7ffb43d5efb97e251527b90ff184c4b24f WatchSource:0}: Error finding container 6cf4350e691182314ac1a6a1df8cda7ffb43d5efb97e251527b90ff184c4b24f: Status 404 returned error can't find the container with id 6cf4350e691182314ac1a6a1df8cda7ffb43d5efb97e251527b90ff184c4b24f Feb 26 09:09:26 crc kubenswrapper[4741]: I0226 09:09:26.632924 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" event={"ID":"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a","Type":"ContainerStarted","Data":"6cf4350e691182314ac1a6a1df8cda7ffb43d5efb97e251527b90ff184c4b24f"} Feb 26 09:09:27 crc kubenswrapper[4741]: I0226 09:09:27.667016 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" event={"ID":"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a","Type":"ContainerStarted","Data":"da6e531dbe4a99c315191a3952763f86e9d8b1d28dbac58ad07a041eaa0bf58c"} Feb 26 09:09:27 crc kubenswrapper[4741]: I0226 09:09:27.708843 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" podStartSLOduration=3.031211413 podStartE2EDuration="3.708813104s" podCreationTimestamp="2026-02-26 09:09:24 +0000 UTC" 
firstStartedPulling="2026-02-26 09:09:25.828712072 +0000 UTC m=+3400.824649459" lastFinishedPulling="2026-02-26 09:09:26.506313763 +0000 UTC m=+3401.502251150" observedRunningTime="2026-02-26 09:09:27.695078784 +0000 UTC m=+3402.691016181" watchObservedRunningTime="2026-02-26 09:09:27.708813104 +0000 UTC m=+3402.704750501" Feb 26 09:09:31 crc kubenswrapper[4741]: I0226 09:09:31.953873 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:31 crc kubenswrapper[4741]: I0226 09:09:31.954382 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:33 crc kubenswrapper[4741]: I0226 09:09:33.025744 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mtpsh" podUID="03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" containerName="registry-server" probeResult="failure" output=< Feb 26 09:09:33 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:09:33 crc kubenswrapper[4741]: > Feb 26 09:09:42 crc kubenswrapper[4741]: I0226 09:09:42.033066 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:42 crc kubenswrapper[4741]: I0226 09:09:42.095317 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:42 crc kubenswrapper[4741]: I0226 09:09:42.830697 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mtpsh"] Feb 26 09:09:42 crc kubenswrapper[4741]: I0226 09:09:42.906822 4741 generic.go:334] "Generic (PLEG): container finished" podID="8e4fadac-0554-4e65-a18c-b96b1bf9cb1a" containerID="da6e531dbe4a99c315191a3952763f86e9d8b1d28dbac58ad07a041eaa0bf58c" exitCode=0 Feb 26 09:09:42 crc kubenswrapper[4741]: I0226 09:09:42.907313 
4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" event={"ID":"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a","Type":"ContainerDied","Data":"da6e531dbe4a99c315191a3952763f86e9d8b1d28dbac58ad07a041eaa0bf58c"} Feb 26 09:09:43 crc kubenswrapper[4741]: I0226 09:09:43.919567 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mtpsh" podUID="03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" containerName="registry-server" containerID="cri-o://3f4d20f79a6b9283b3fe57a7c265b97a5431145b37bbc86b0c81c8e65ab925c1" gracePeriod=2 Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.648131 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.664155 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.737354 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-inventory\") pod \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.737470 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-logging-compute-config-data-1\") pod \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.737563 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p52qg\" (UniqueName: 
\"kubernetes.io/projected/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-kube-api-access-p52qg\") pod \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.737759 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-ssh-key-openstack-edpm-ipam\") pod \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.737967 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-logging-compute-config-data-0\") pod \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\" (UID: \"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a\") " Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.759662 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-kube-api-access-p52qg" (OuterVolumeSpecName: "kube-api-access-p52qg") pod "8e4fadac-0554-4e65-a18c-b96b1bf9cb1a" (UID: "8e4fadac-0554-4e65-a18c-b96b1bf9cb1a"). InnerVolumeSpecName "kube-api-access-p52qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.810028 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "8e4fadac-0554-4e65-a18c-b96b1bf9cb1a" (UID: "8e4fadac-0554-4e65-a18c-b96b1bf9cb1a"). InnerVolumeSpecName "logging-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.832300 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "8e4fadac-0554-4e65-a18c-b96b1bf9cb1a" (UID: "8e4fadac-0554-4e65-a18c-b96b1bf9cb1a"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.832523 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-inventory" (OuterVolumeSpecName: "inventory") pod "8e4fadac-0554-4e65-a18c-b96b1bf9cb1a" (UID: "8e4fadac-0554-4e65-a18c-b96b1bf9cb1a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.841344 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-utilities\") pod \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\" (UID: \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\") " Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.842154 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-utilities" (OuterVolumeSpecName: "utilities") pod "03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" (UID: "03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.842272 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-catalog-content\") pod \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\" (UID: \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\") " Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.842334 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6stw\" (UniqueName: \"kubernetes.io/projected/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-kube-api-access-h6stw\") pod \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\" (UID: \"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b\") " Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.843360 4741 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.843423 4741 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.843440 4741 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.843456 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p52qg\" (UniqueName: \"kubernetes.io/projected/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-kube-api-access-p52qg\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.843471 4741 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.848678 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-kube-api-access-h6stw" (OuterVolumeSpecName: "kube-api-access-h6stw") pod "03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" (UID: "03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b"). InnerVolumeSpecName "kube-api-access-h6stw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.852308 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8e4fadac-0554-4e65-a18c-b96b1bf9cb1a" (UID: "8e4fadac-0554-4e65-a18c-b96b1bf9cb1a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.894228 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" (UID: "03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.935551 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" event={"ID":"8e4fadac-0554-4e65-a18c-b96b1bf9cb1a","Type":"ContainerDied","Data":"6cf4350e691182314ac1a6a1df8cda7ffb43d5efb97e251527b90ff184c4b24f"} Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.937229 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cf4350e691182314ac1a6a1df8cda7ffb43d5efb97e251527b90ff184c4b24f" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.935606 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-njkxx" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.942080 4741 generic.go:334] "Generic (PLEG): container finished" podID="03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" containerID="3f4d20f79a6b9283b3fe57a7c265b97a5431145b37bbc86b0c81c8e65ab925c1" exitCode=0 Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.942166 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mtpsh" event={"ID":"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b","Type":"ContainerDied","Data":"3f4d20f79a6b9283b3fe57a7c265b97a5431145b37bbc86b0c81c8e65ab925c1"} Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.942423 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mtpsh" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.942524 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mtpsh" event={"ID":"03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b","Type":"ContainerDied","Data":"55ef043af3c8b9809e47ec5f8e1af0bf225a35372feec13c71c84e1f65d90177"} Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.942559 4741 scope.go:117] "RemoveContainer" containerID="3f4d20f79a6b9283b3fe57a7c265b97a5431145b37bbc86b0c81c8e65ab925c1" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.946690 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.946736 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6stw\" (UniqueName: \"kubernetes.io/projected/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b-kube-api-access-h6stw\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.946758 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e4fadac-0554-4e65-a18c-b96b1bf9cb1a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 09:09:44 crc kubenswrapper[4741]: I0226 09:09:44.994522 4741 scope.go:117] "RemoveContainer" containerID="111e7d9a45f63a1535fcc9704c336876394722f98fd5036bf76ed10804ace043" Feb 26 09:09:45 crc kubenswrapper[4741]: I0226 09:09:45.023472 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mtpsh"] Feb 26 09:09:45 crc kubenswrapper[4741]: I0226 09:09:45.029789 4741 scope.go:117] "RemoveContainer" containerID="1004bc75b7d0265d00569ec8b6480308e2c80c6633793c4322faf0542305cba0" Feb 26 09:09:45 crc kubenswrapper[4741]: I0226 
09:09:45.040429 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mtpsh"] Feb 26 09:09:45 crc kubenswrapper[4741]: I0226 09:09:45.060838 4741 scope.go:117] "RemoveContainer" containerID="3f4d20f79a6b9283b3fe57a7c265b97a5431145b37bbc86b0c81c8e65ab925c1" Feb 26 09:09:45 crc kubenswrapper[4741]: E0226 09:09:45.062302 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f4d20f79a6b9283b3fe57a7c265b97a5431145b37bbc86b0c81c8e65ab925c1\": container with ID starting with 3f4d20f79a6b9283b3fe57a7c265b97a5431145b37bbc86b0c81c8e65ab925c1 not found: ID does not exist" containerID="3f4d20f79a6b9283b3fe57a7c265b97a5431145b37bbc86b0c81c8e65ab925c1" Feb 26 09:09:45 crc kubenswrapper[4741]: I0226 09:09:45.062342 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f4d20f79a6b9283b3fe57a7c265b97a5431145b37bbc86b0c81c8e65ab925c1"} err="failed to get container status \"3f4d20f79a6b9283b3fe57a7c265b97a5431145b37bbc86b0c81c8e65ab925c1\": rpc error: code = NotFound desc = could not find container \"3f4d20f79a6b9283b3fe57a7c265b97a5431145b37bbc86b0c81c8e65ab925c1\": container with ID starting with 3f4d20f79a6b9283b3fe57a7c265b97a5431145b37bbc86b0c81c8e65ab925c1 not found: ID does not exist" Feb 26 09:09:45 crc kubenswrapper[4741]: I0226 09:09:45.062391 4741 scope.go:117] "RemoveContainer" containerID="111e7d9a45f63a1535fcc9704c336876394722f98fd5036bf76ed10804ace043" Feb 26 09:09:45 crc kubenswrapper[4741]: E0226 09:09:45.062898 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"111e7d9a45f63a1535fcc9704c336876394722f98fd5036bf76ed10804ace043\": container with ID starting with 111e7d9a45f63a1535fcc9704c336876394722f98fd5036bf76ed10804ace043 not found: ID does not exist" 
containerID="111e7d9a45f63a1535fcc9704c336876394722f98fd5036bf76ed10804ace043" Feb 26 09:09:45 crc kubenswrapper[4741]: I0226 09:09:45.063011 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"111e7d9a45f63a1535fcc9704c336876394722f98fd5036bf76ed10804ace043"} err="failed to get container status \"111e7d9a45f63a1535fcc9704c336876394722f98fd5036bf76ed10804ace043\": rpc error: code = NotFound desc = could not find container \"111e7d9a45f63a1535fcc9704c336876394722f98fd5036bf76ed10804ace043\": container with ID starting with 111e7d9a45f63a1535fcc9704c336876394722f98fd5036bf76ed10804ace043 not found: ID does not exist" Feb 26 09:09:45 crc kubenswrapper[4741]: I0226 09:09:45.063103 4741 scope.go:117] "RemoveContainer" containerID="1004bc75b7d0265d00569ec8b6480308e2c80c6633793c4322faf0542305cba0" Feb 26 09:09:45 crc kubenswrapper[4741]: E0226 09:09:45.063538 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1004bc75b7d0265d00569ec8b6480308e2c80c6633793c4322faf0542305cba0\": container with ID starting with 1004bc75b7d0265d00569ec8b6480308e2c80c6633793c4322faf0542305cba0 not found: ID does not exist" containerID="1004bc75b7d0265d00569ec8b6480308e2c80c6633793c4322faf0542305cba0" Feb 26 09:09:45 crc kubenswrapper[4741]: I0226 09:09:45.063632 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1004bc75b7d0265d00569ec8b6480308e2c80c6633793c4322faf0542305cba0"} err="failed to get container status \"1004bc75b7d0265d00569ec8b6480308e2c80c6633793c4322faf0542305cba0\": rpc error: code = NotFound desc = could not find container \"1004bc75b7d0265d00569ec8b6480308e2c80c6633793c4322faf0542305cba0\": container with ID starting with 1004bc75b7d0265d00569ec8b6480308e2c80c6633793c4322faf0542305cba0 not found: ID does not exist" Feb 26 09:09:45 crc kubenswrapper[4741]: I0226 09:09:45.808544 4741 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" path="/var/lib/kubelet/pods/03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b/volumes" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.162201 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534950-zh98j"] Feb 26 09:10:00 crc kubenswrapper[4741]: E0226 09:10:00.163698 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" containerName="registry-server" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.163718 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" containerName="registry-server" Feb 26 09:10:00 crc kubenswrapper[4741]: E0226 09:10:00.163755 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" containerName="extract-content" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.163765 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" containerName="extract-content" Feb 26 09:10:00 crc kubenswrapper[4741]: E0226 09:10:00.163791 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4fadac-0554-4e65-a18c-b96b1bf9cb1a" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.163803 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4fadac-0554-4e65-a18c-b96b1bf9cb1a" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 26 09:10:00 crc kubenswrapper[4741]: E0226 09:10:00.163816 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" containerName="extract-utilities" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.163829 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" 
containerName="extract-utilities" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.164228 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e4fadac-0554-4e65-a18c-b96b1bf9cb1a" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.164262 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d4f1e9-da4f-46e4-849e-9e1c1bc55e1b" containerName="registry-server" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.165539 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534950-zh98j" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.168788 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.169759 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.169974 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.176402 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534950-zh98j"] Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.240091 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c8f2\" (UniqueName: \"kubernetes.io/projected/5acc5b7a-c5ff-4967-baf8-69cf3ac36b95-kube-api-access-7c8f2\") pod \"auto-csr-approver-29534950-zh98j\" (UID: \"5acc5b7a-c5ff-4967-baf8-69cf3ac36b95\") " pod="openshift-infra/auto-csr-approver-29534950-zh98j" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.344368 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c8f2\" (UniqueName: 
\"kubernetes.io/projected/5acc5b7a-c5ff-4967-baf8-69cf3ac36b95-kube-api-access-7c8f2\") pod \"auto-csr-approver-29534950-zh98j\" (UID: \"5acc5b7a-c5ff-4967-baf8-69cf3ac36b95\") " pod="openshift-infra/auto-csr-approver-29534950-zh98j" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.368588 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c8f2\" (UniqueName: \"kubernetes.io/projected/5acc5b7a-c5ff-4967-baf8-69cf3ac36b95-kube-api-access-7c8f2\") pod \"auto-csr-approver-29534950-zh98j\" (UID: \"5acc5b7a-c5ff-4967-baf8-69cf3ac36b95\") " pod="openshift-infra/auto-csr-approver-29534950-zh98j" Feb 26 09:10:00 crc kubenswrapper[4741]: I0226 09:10:00.498224 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534950-zh98j" Feb 26 09:10:01 crc kubenswrapper[4741]: I0226 09:10:01.037530 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534950-zh98j"] Feb 26 09:10:01 crc kubenswrapper[4741]: I0226 09:10:01.151790 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534950-zh98j" event={"ID":"5acc5b7a-c5ff-4967-baf8-69cf3ac36b95","Type":"ContainerStarted","Data":"f0572a7597f602986c85eac58f3c600607e4cadfaff1a370842a95c0e32fbf93"} Feb 26 09:10:03 crc kubenswrapper[4741]: I0226 09:10:03.186564 4741 generic.go:334] "Generic (PLEG): container finished" podID="5acc5b7a-c5ff-4967-baf8-69cf3ac36b95" containerID="8928970caa60aefa656245b57c5c91cc2d72fa090e6e9a484d3eafc6e892e17d" exitCode=0 Feb 26 09:10:03 crc kubenswrapper[4741]: I0226 09:10:03.187433 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534950-zh98j" event={"ID":"5acc5b7a-c5ff-4967-baf8-69cf3ac36b95","Type":"ContainerDied","Data":"8928970caa60aefa656245b57c5c91cc2d72fa090e6e9a484d3eafc6e892e17d"} Feb 26 09:10:04 crc kubenswrapper[4741]: I0226 09:10:04.815657 4741 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534950-zh98j" Feb 26 09:10:04 crc kubenswrapper[4741]: I0226 09:10:04.911925 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c8f2\" (UniqueName: \"kubernetes.io/projected/5acc5b7a-c5ff-4967-baf8-69cf3ac36b95-kube-api-access-7c8f2\") pod \"5acc5b7a-c5ff-4967-baf8-69cf3ac36b95\" (UID: \"5acc5b7a-c5ff-4967-baf8-69cf3ac36b95\") " Feb 26 09:10:04 crc kubenswrapper[4741]: I0226 09:10:04.918102 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5acc5b7a-c5ff-4967-baf8-69cf3ac36b95-kube-api-access-7c8f2" (OuterVolumeSpecName: "kube-api-access-7c8f2") pod "5acc5b7a-c5ff-4967-baf8-69cf3ac36b95" (UID: "5acc5b7a-c5ff-4967-baf8-69cf3ac36b95"). InnerVolumeSpecName "kube-api-access-7c8f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:10:05 crc kubenswrapper[4741]: I0226 09:10:05.016014 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c8f2\" (UniqueName: \"kubernetes.io/projected/5acc5b7a-c5ff-4967-baf8-69cf3ac36b95-kube-api-access-7c8f2\") on node \"crc\" DevicePath \"\"" Feb 26 09:10:05 crc kubenswrapper[4741]: I0226 09:10:05.239726 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534950-zh98j" event={"ID":"5acc5b7a-c5ff-4967-baf8-69cf3ac36b95","Type":"ContainerDied","Data":"f0572a7597f602986c85eac58f3c600607e4cadfaff1a370842a95c0e32fbf93"} Feb 26 09:10:05 crc kubenswrapper[4741]: I0226 09:10:05.240059 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0572a7597f602986c85eac58f3c600607e4cadfaff1a370842a95c0e32fbf93" Feb 26 09:10:05 crc kubenswrapper[4741]: I0226 09:10:05.239819 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534950-zh98j" Feb 26 09:10:05 crc kubenswrapper[4741]: I0226 09:10:05.910376 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534944-5zf48"] Feb 26 09:10:05 crc kubenswrapper[4741]: I0226 09:10:05.925169 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534944-5zf48"] Feb 26 09:10:07 crc kubenswrapper[4741]: I0226 09:10:07.807097 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e302707-410f-4651-83e2-8d9f7f80514b" path="/var/lib/kubelet/pods/8e302707-410f-4651-83e2-8d9f7f80514b/volumes" Feb 26 09:11:02 crc kubenswrapper[4741]: I0226 09:11:02.892267 4741 scope.go:117] "RemoveContainer" containerID="6b8a34adcc9c525451b57cfe7171b2177acc35922402b8b6c1ee65e70f45e883" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.335974 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z7tpz"] Feb 26 09:11:17 crc kubenswrapper[4741]: E0226 09:11:17.337244 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5acc5b7a-c5ff-4967-baf8-69cf3ac36b95" containerName="oc" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.337263 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="5acc5b7a-c5ff-4967-baf8-69cf3ac36b95" containerName="oc" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.337591 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="5acc5b7a-c5ff-4967-baf8-69cf3ac36b95" containerName="oc" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.339562 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.350009 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z7tpz"] Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.446601 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl822\" (UniqueName: \"kubernetes.io/projected/149a8410-44a4-4b2e-97d3-f3d18fe04a97-kube-api-access-fl822\") pod \"redhat-operators-z7tpz\" (UID: \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\") " pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.447871 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149a8410-44a4-4b2e-97d3-f3d18fe04a97-catalog-content\") pod \"redhat-operators-z7tpz\" (UID: \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\") " pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.447977 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149a8410-44a4-4b2e-97d3-f3d18fe04a97-utilities\") pod \"redhat-operators-z7tpz\" (UID: \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\") " pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.551279 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl822\" (UniqueName: \"kubernetes.io/projected/149a8410-44a4-4b2e-97d3-f3d18fe04a97-kube-api-access-fl822\") pod \"redhat-operators-z7tpz\" (UID: \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\") " pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.551544 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149a8410-44a4-4b2e-97d3-f3d18fe04a97-catalog-content\") pod \"redhat-operators-z7tpz\" (UID: \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\") " pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.551594 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149a8410-44a4-4b2e-97d3-f3d18fe04a97-utilities\") pod \"redhat-operators-z7tpz\" (UID: \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\") " pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.552201 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149a8410-44a4-4b2e-97d3-f3d18fe04a97-catalog-content\") pod \"redhat-operators-z7tpz\" (UID: \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\") " pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.552235 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149a8410-44a4-4b2e-97d3-f3d18fe04a97-utilities\") pod \"redhat-operators-z7tpz\" (UID: \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\") " pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.578433 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl822\" (UniqueName: \"kubernetes.io/projected/149a8410-44a4-4b2e-97d3-f3d18fe04a97-kube-api-access-fl822\") pod \"redhat-operators-z7tpz\" (UID: \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\") " pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:17 crc kubenswrapper[4741]: I0226 09:11:17.726358 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:18 crc kubenswrapper[4741]: I0226 09:11:18.271086 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z7tpz"] Feb 26 09:11:18 crc kubenswrapper[4741]: I0226 09:11:18.454496 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7tpz" event={"ID":"149a8410-44a4-4b2e-97d3-f3d18fe04a97","Type":"ContainerStarted","Data":"75aebdf5d47cc257cae4a4f3c83ddf92e329eecc48508c26166d89b3d57d3cc7"} Feb 26 09:11:19 crc kubenswrapper[4741]: I0226 09:11:19.477280 4741 generic.go:334] "Generic (PLEG): container finished" podID="149a8410-44a4-4b2e-97d3-f3d18fe04a97" containerID="bcdca5d21df814bf2291d01ac0f965d2dc3959c1d03ade13d1679698a3f5b67f" exitCode=0 Feb 26 09:11:19 crc kubenswrapper[4741]: I0226 09:11:19.477585 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7tpz" event={"ID":"149a8410-44a4-4b2e-97d3-f3d18fe04a97","Type":"ContainerDied","Data":"bcdca5d21df814bf2291d01ac0f965d2dc3959c1d03ade13d1679698a3f5b67f"} Feb 26 09:11:21 crc kubenswrapper[4741]: I0226 09:11:21.515176 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7tpz" event={"ID":"149a8410-44a4-4b2e-97d3-f3d18fe04a97","Type":"ContainerStarted","Data":"9ccf7342717cec0d8ce5af0e47d680dde6706527a9e7244680d02932f647a668"} Feb 26 09:11:25 crc kubenswrapper[4741]: I0226 09:11:25.149752 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:11:25 crc kubenswrapper[4741]: I0226 09:11:25.150171 4741 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:11:28 crc kubenswrapper[4741]: I0226 09:11:28.601658 4741 generic.go:334] "Generic (PLEG): container finished" podID="149a8410-44a4-4b2e-97d3-f3d18fe04a97" containerID="9ccf7342717cec0d8ce5af0e47d680dde6706527a9e7244680d02932f647a668" exitCode=0 Feb 26 09:11:28 crc kubenswrapper[4741]: I0226 09:11:28.601740 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7tpz" event={"ID":"149a8410-44a4-4b2e-97d3-f3d18fe04a97","Type":"ContainerDied","Data":"9ccf7342717cec0d8ce5af0e47d680dde6706527a9e7244680d02932f647a668"} Feb 26 09:11:30 crc kubenswrapper[4741]: I0226 09:11:30.636861 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7tpz" event={"ID":"149a8410-44a4-4b2e-97d3-f3d18fe04a97","Type":"ContainerStarted","Data":"760257579ab00a7b0f9f8d5d757b0d8c3a311f63493b061bebee151e61ff56af"} Feb 26 09:11:30 crc kubenswrapper[4741]: I0226 09:11:30.676174 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z7tpz" podStartSLOduration=3.513376839 podStartE2EDuration="13.676141156s" podCreationTimestamp="2026-02-26 09:11:17 +0000 UTC" firstStartedPulling="2026-02-26 09:11:19.479540576 +0000 UTC m=+3514.475477963" lastFinishedPulling="2026-02-26 09:11:29.642304893 +0000 UTC m=+3524.638242280" observedRunningTime="2026-02-26 09:11:30.664141495 +0000 UTC m=+3525.660078902" watchObservedRunningTime="2026-02-26 09:11:30.676141156 +0000 UTC m=+3525.672078573" Feb 26 09:11:32 crc kubenswrapper[4741]: E0226 09:11:32.533837 4741 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.166:47964->38.102.83.166:36527: read tcp 
38.102.83.166:47964->38.102.83.166:36527: read: connection reset by peer Feb 26 09:11:37 crc kubenswrapper[4741]: I0226 09:11:37.727384 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:37 crc kubenswrapper[4741]: I0226 09:11:37.728059 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:37 crc kubenswrapper[4741]: I0226 09:11:37.812465 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:38 crc kubenswrapper[4741]: I0226 09:11:38.852582 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:41 crc kubenswrapper[4741]: I0226 09:11:41.478148 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z7tpz"] Feb 26 09:11:41 crc kubenswrapper[4741]: I0226 09:11:41.479175 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z7tpz" podUID="149a8410-44a4-4b2e-97d3-f3d18fe04a97" containerName="registry-server" containerID="cri-o://760257579ab00a7b0f9f8d5d757b0d8c3a311f63493b061bebee151e61ff56af" gracePeriod=2 Feb 26 09:11:41 crc kubenswrapper[4741]: I0226 09:11:41.804516 4741 generic.go:334] "Generic (PLEG): container finished" podID="149a8410-44a4-4b2e-97d3-f3d18fe04a97" containerID="760257579ab00a7b0f9f8d5d757b0d8c3a311f63493b061bebee151e61ff56af" exitCode=0 Feb 26 09:11:41 crc kubenswrapper[4741]: I0226 09:11:41.816027 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7tpz" event={"ID":"149a8410-44a4-4b2e-97d3-f3d18fe04a97","Type":"ContainerDied","Data":"760257579ab00a7b0f9f8d5d757b0d8c3a311f63493b061bebee151e61ff56af"} Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.013334 
4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.190448 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl822\" (UniqueName: \"kubernetes.io/projected/149a8410-44a4-4b2e-97d3-f3d18fe04a97-kube-api-access-fl822\") pod \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\" (UID: \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\") " Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.190839 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149a8410-44a4-4b2e-97d3-f3d18fe04a97-catalog-content\") pod \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\" (UID: \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\") " Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.191169 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149a8410-44a4-4b2e-97d3-f3d18fe04a97-utilities\") pod \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\" (UID: \"149a8410-44a4-4b2e-97d3-f3d18fe04a97\") " Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.193122 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149a8410-44a4-4b2e-97d3-f3d18fe04a97-utilities" (OuterVolumeSpecName: "utilities") pod "149a8410-44a4-4b2e-97d3-f3d18fe04a97" (UID: "149a8410-44a4-4b2e-97d3-f3d18fe04a97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.205274 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149a8410-44a4-4b2e-97d3-f3d18fe04a97-kube-api-access-fl822" (OuterVolumeSpecName: "kube-api-access-fl822") pod "149a8410-44a4-4b2e-97d3-f3d18fe04a97" (UID: "149a8410-44a4-4b2e-97d3-f3d18fe04a97"). 
InnerVolumeSpecName "kube-api-access-fl822". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.299290 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149a8410-44a4-4b2e-97d3-f3d18fe04a97-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.299340 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl822\" (UniqueName: \"kubernetes.io/projected/149a8410-44a4-4b2e-97d3-f3d18fe04a97-kube-api-access-fl822\") on node \"crc\" DevicePath \"\"" Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.336665 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149a8410-44a4-4b2e-97d3-f3d18fe04a97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149a8410-44a4-4b2e-97d3-f3d18fe04a97" (UID: "149a8410-44a4-4b2e-97d3-f3d18fe04a97"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.403463 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149a8410-44a4-4b2e-97d3-f3d18fe04a97-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.842797 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7tpz" event={"ID":"149a8410-44a4-4b2e-97d3-f3d18fe04a97","Type":"ContainerDied","Data":"75aebdf5d47cc257cae4a4f3c83ddf92e329eecc48508c26166d89b3d57d3cc7"} Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.843431 4741 scope.go:117] "RemoveContainer" containerID="760257579ab00a7b0f9f8d5d757b0d8c3a311f63493b061bebee151e61ff56af" Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.842905 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z7tpz" Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.878606 4741 scope.go:117] "RemoveContainer" containerID="9ccf7342717cec0d8ce5af0e47d680dde6706527a9e7244680d02932f647a668" Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.895642 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z7tpz"] Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.910126 4741 scope.go:117] "RemoveContainer" containerID="bcdca5d21df814bf2291d01ac0f965d2dc3959c1d03ade13d1679698a3f5b67f" Feb 26 09:11:43 crc kubenswrapper[4741]: I0226 09:11:43.912871 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z7tpz"] Feb 26 09:11:45 crc kubenswrapper[4741]: I0226 09:11:45.811144 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149a8410-44a4-4b2e-97d3-f3d18fe04a97" path="/var/lib/kubelet/pods/149a8410-44a4-4b2e-97d3-f3d18fe04a97/volumes" Feb 26 09:11:55 crc kubenswrapper[4741]: I0226 09:11:55.148886 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:11:55 crc kubenswrapper[4741]: I0226 09:11:55.149547 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.156535 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534952-dmkts"] Feb 26 09:12:00 crc kubenswrapper[4741]: E0226 
09:12:00.157705 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149a8410-44a4-4b2e-97d3-f3d18fe04a97" containerName="extract-utilities" Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.157723 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="149a8410-44a4-4b2e-97d3-f3d18fe04a97" containerName="extract-utilities" Feb 26 09:12:00 crc kubenswrapper[4741]: E0226 09:12:00.157762 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149a8410-44a4-4b2e-97d3-f3d18fe04a97" containerName="extract-content" Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.157769 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="149a8410-44a4-4b2e-97d3-f3d18fe04a97" containerName="extract-content" Feb 26 09:12:00 crc kubenswrapper[4741]: E0226 09:12:00.157799 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149a8410-44a4-4b2e-97d3-f3d18fe04a97" containerName="registry-server" Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.157807 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="149a8410-44a4-4b2e-97d3-f3d18fe04a97" containerName="registry-server" Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.158078 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="149a8410-44a4-4b2e-97d3-f3d18fe04a97" containerName="registry-server" Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.159417 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534952-dmkts" Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.164093 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.164382 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.164622 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.176418 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534952-dmkts"] Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.321651 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frhdg\" (UniqueName: \"kubernetes.io/projected/b5676ab0-d942-4f77-9788-cc6cee938d79-kube-api-access-frhdg\") pod \"auto-csr-approver-29534952-dmkts\" (UID: \"b5676ab0-d942-4f77-9788-cc6cee938d79\") " pod="openshift-infra/auto-csr-approver-29534952-dmkts" Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.426444 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frhdg\" (UniqueName: \"kubernetes.io/projected/b5676ab0-d942-4f77-9788-cc6cee938d79-kube-api-access-frhdg\") pod \"auto-csr-approver-29534952-dmkts\" (UID: \"b5676ab0-d942-4f77-9788-cc6cee938d79\") " pod="openshift-infra/auto-csr-approver-29534952-dmkts" Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.480844 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frhdg\" (UniqueName: \"kubernetes.io/projected/b5676ab0-d942-4f77-9788-cc6cee938d79-kube-api-access-frhdg\") pod \"auto-csr-approver-29534952-dmkts\" (UID: \"b5676ab0-d942-4f77-9788-cc6cee938d79\") " 
pod="openshift-infra/auto-csr-approver-29534952-dmkts" Feb 26 09:12:00 crc kubenswrapper[4741]: I0226 09:12:00.487779 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534952-dmkts" Feb 26 09:12:01 crc kubenswrapper[4741]: I0226 09:12:01.031921 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534952-dmkts"] Feb 26 09:12:01 crc kubenswrapper[4741]: I0226 09:12:01.067821 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534952-dmkts" event={"ID":"b5676ab0-d942-4f77-9788-cc6cee938d79","Type":"ContainerStarted","Data":"42c5a9bbe4c46939800eeca710603722067969eddd559daf3459840eb49d2022"} Feb 26 09:12:03 crc kubenswrapper[4741]: I0226 09:12:03.105938 4741 generic.go:334] "Generic (PLEG): container finished" podID="b5676ab0-d942-4f77-9788-cc6cee938d79" containerID="c2c3ffacc0f3bf640fc4d732fa95502ba63b5c62ade1c4ca9eb926b963df947f" exitCode=0 Feb 26 09:12:03 crc kubenswrapper[4741]: I0226 09:12:03.106072 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534952-dmkts" event={"ID":"b5676ab0-d942-4f77-9788-cc6cee938d79","Type":"ContainerDied","Data":"c2c3ffacc0f3bf640fc4d732fa95502ba63b5c62ade1c4ca9eb926b963df947f"} Feb 26 09:12:04 crc kubenswrapper[4741]: I0226 09:12:04.585815 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534952-dmkts" Feb 26 09:12:04 crc kubenswrapper[4741]: I0226 09:12:04.780923 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frhdg\" (UniqueName: \"kubernetes.io/projected/b5676ab0-d942-4f77-9788-cc6cee938d79-kube-api-access-frhdg\") pod \"b5676ab0-d942-4f77-9788-cc6cee938d79\" (UID: \"b5676ab0-d942-4f77-9788-cc6cee938d79\") " Feb 26 09:12:04 crc kubenswrapper[4741]: I0226 09:12:04.787701 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5676ab0-d942-4f77-9788-cc6cee938d79-kube-api-access-frhdg" (OuterVolumeSpecName: "kube-api-access-frhdg") pod "b5676ab0-d942-4f77-9788-cc6cee938d79" (UID: "b5676ab0-d942-4f77-9788-cc6cee938d79"). InnerVolumeSpecName "kube-api-access-frhdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:12:04 crc kubenswrapper[4741]: I0226 09:12:04.884882 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frhdg\" (UniqueName: \"kubernetes.io/projected/b5676ab0-d942-4f77-9788-cc6cee938d79-kube-api-access-frhdg\") on node \"crc\" DevicePath \"\"" Feb 26 09:12:05 crc kubenswrapper[4741]: I0226 09:12:05.135310 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534952-dmkts" event={"ID":"b5676ab0-d942-4f77-9788-cc6cee938d79","Type":"ContainerDied","Data":"42c5a9bbe4c46939800eeca710603722067969eddd559daf3459840eb49d2022"} Feb 26 09:12:05 crc kubenswrapper[4741]: I0226 09:12:05.135358 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42c5a9bbe4c46939800eeca710603722067969eddd559daf3459840eb49d2022" Feb 26 09:12:05 crc kubenswrapper[4741]: I0226 09:12:05.135428 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534952-dmkts" Feb 26 09:12:05 crc kubenswrapper[4741]: I0226 09:12:05.690661 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534946-swrw8"] Feb 26 09:12:05 crc kubenswrapper[4741]: I0226 09:12:05.702097 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534946-swrw8"] Feb 26 09:12:05 crc kubenswrapper[4741]: I0226 09:12:05.812330 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70bc18d9-2505-4f9a-8c56-b8e2e45aa841" path="/var/lib/kubelet/pods/70bc18d9-2505-4f9a-8c56-b8e2e45aa841/volumes" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.452175 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jc6l6"] Feb 26 09:12:19 crc kubenswrapper[4741]: E0226 09:12:19.453708 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5676ab0-d942-4f77-9788-cc6cee938d79" containerName="oc" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.453734 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5676ab0-d942-4f77-9788-cc6cee938d79" containerName="oc" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.454046 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5676ab0-d942-4f77-9788-cc6cee938d79" containerName="oc" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.456546 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.505611 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc6l6"] Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.593244 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af0b59e-3742-4820-ab61-96e14d59d330-catalog-content\") pod \"redhat-marketplace-jc6l6\" (UID: \"3af0b59e-3742-4820-ab61-96e14d59d330\") " pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.593311 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njptk\" (UniqueName: \"kubernetes.io/projected/3af0b59e-3742-4820-ab61-96e14d59d330-kube-api-access-njptk\") pod \"redhat-marketplace-jc6l6\" (UID: \"3af0b59e-3742-4820-ab61-96e14d59d330\") " pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.601633 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af0b59e-3742-4820-ab61-96e14d59d330-utilities\") pod \"redhat-marketplace-jc6l6\" (UID: \"3af0b59e-3742-4820-ab61-96e14d59d330\") " pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.705406 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af0b59e-3742-4820-ab61-96e14d59d330-catalog-content\") pod \"redhat-marketplace-jc6l6\" (UID: \"3af0b59e-3742-4820-ab61-96e14d59d330\") " pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.705456 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-njptk\" (UniqueName: \"kubernetes.io/projected/3af0b59e-3742-4820-ab61-96e14d59d330-kube-api-access-njptk\") pod \"redhat-marketplace-jc6l6\" (UID: \"3af0b59e-3742-4820-ab61-96e14d59d330\") " pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.705534 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af0b59e-3742-4820-ab61-96e14d59d330-utilities\") pod \"redhat-marketplace-jc6l6\" (UID: \"3af0b59e-3742-4820-ab61-96e14d59d330\") " pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.706152 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af0b59e-3742-4820-ab61-96e14d59d330-catalog-content\") pod \"redhat-marketplace-jc6l6\" (UID: \"3af0b59e-3742-4820-ab61-96e14d59d330\") " pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.706448 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af0b59e-3742-4820-ab61-96e14d59d330-utilities\") pod \"redhat-marketplace-jc6l6\" (UID: \"3af0b59e-3742-4820-ab61-96e14d59d330\") " pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.727460 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njptk\" (UniqueName: \"kubernetes.io/projected/3af0b59e-3742-4820-ab61-96e14d59d330-kube-api-access-njptk\") pod \"redhat-marketplace-jc6l6\" (UID: \"3af0b59e-3742-4820-ab61-96e14d59d330\") " pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:19 crc kubenswrapper[4741]: I0226 09:12:19.791870 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:20 crc kubenswrapper[4741]: I0226 09:12:20.898030 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc6l6"] Feb 26 09:12:21 crc kubenswrapper[4741]: I0226 09:12:21.472826 4741 generic.go:334] "Generic (PLEG): container finished" podID="3af0b59e-3742-4820-ab61-96e14d59d330" containerID="3ebd5114295d8cb91f64730fd1b7c569ea961bb750993394a100d82a61f28b94" exitCode=0 Feb 26 09:12:21 crc kubenswrapper[4741]: I0226 09:12:21.473063 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc6l6" event={"ID":"3af0b59e-3742-4820-ab61-96e14d59d330","Type":"ContainerDied","Data":"3ebd5114295d8cb91f64730fd1b7c569ea961bb750993394a100d82a61f28b94"} Feb 26 09:12:21 crc kubenswrapper[4741]: I0226 09:12:21.473186 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc6l6" event={"ID":"3af0b59e-3742-4820-ab61-96e14d59d330","Type":"ContainerStarted","Data":"bde69ee65b2c474981cf3ead5c6feb29148e9fc1a1e41aca81b5cb79ae4dc16a"} Feb 26 09:12:23 crc kubenswrapper[4741]: I0226 09:12:23.517132 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc6l6" event={"ID":"3af0b59e-3742-4820-ab61-96e14d59d330","Type":"ContainerStarted","Data":"5ac51b6daf6474e3e967abfb187ab0f2c88df1273534ab2429da31f0b9083e65"} Feb 26 09:12:24 crc kubenswrapper[4741]: I0226 09:12:24.533455 4741 generic.go:334] "Generic (PLEG): container finished" podID="3af0b59e-3742-4820-ab61-96e14d59d330" containerID="5ac51b6daf6474e3e967abfb187ab0f2c88df1273534ab2429da31f0b9083e65" exitCode=0 Feb 26 09:12:24 crc kubenswrapper[4741]: I0226 09:12:24.533520 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc6l6" 
event={"ID":"3af0b59e-3742-4820-ab61-96e14d59d330","Type":"ContainerDied","Data":"5ac51b6daf6474e3e967abfb187ab0f2c88df1273534ab2429da31f0b9083e65"} Feb 26 09:12:25 crc kubenswrapper[4741]: I0226 09:12:25.149694 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:12:25 crc kubenswrapper[4741]: I0226 09:12:25.150075 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:12:25 crc kubenswrapper[4741]: I0226 09:12:25.150147 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 09:12:25 crc kubenswrapper[4741]: I0226 09:12:25.151382 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5fa0e51372b3ec95f08ed1d9dd2b0c86b06f97041ffb79317f46eb7b4873e0f"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 09:12:25 crc kubenswrapper[4741]: I0226 09:12:25.151449 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://e5fa0e51372b3ec95f08ed1d9dd2b0c86b06f97041ffb79317f46eb7b4873e0f" gracePeriod=600 Feb 26 09:12:25 crc kubenswrapper[4741]: I0226 09:12:25.549090 4741 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc6l6" event={"ID":"3af0b59e-3742-4820-ab61-96e14d59d330","Type":"ContainerStarted","Data":"be699db224159087635f5c3da8ee466a1f2ea36b6fd56bcb988fad0df5b51313"} Feb 26 09:12:25 crc kubenswrapper[4741]: I0226 09:12:25.553282 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="e5fa0e51372b3ec95f08ed1d9dd2b0c86b06f97041ffb79317f46eb7b4873e0f" exitCode=0 Feb 26 09:12:25 crc kubenswrapper[4741]: I0226 09:12:25.553317 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"e5fa0e51372b3ec95f08ed1d9dd2b0c86b06f97041ffb79317f46eb7b4873e0f"} Feb 26 09:12:25 crc kubenswrapper[4741]: I0226 09:12:25.553345 4741 scope.go:117] "RemoveContainer" containerID="1f31c2c1405cbf0905e1f286f3376c54078fb43060a22b8db4946414e4e103fe" Feb 26 09:12:25 crc kubenswrapper[4741]: I0226 09:12:25.582128 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jc6l6" podStartSLOduration=3.083250451 podStartE2EDuration="6.582085938s" podCreationTimestamp="2026-02-26 09:12:19 +0000 UTC" firstStartedPulling="2026-02-26 09:12:21.479527716 +0000 UTC m=+3576.475465103" lastFinishedPulling="2026-02-26 09:12:24.978363203 +0000 UTC m=+3579.974300590" observedRunningTime="2026-02-26 09:12:25.574008829 +0000 UTC m=+3580.569946226" watchObservedRunningTime="2026-02-26 09:12:25.582085938 +0000 UTC m=+3580.578023325" Feb 26 09:12:26 crc kubenswrapper[4741]: I0226 09:12:26.567655 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf"} 
Feb 26 09:12:29 crc kubenswrapper[4741]: I0226 09:12:29.808782 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:29 crc kubenswrapper[4741]: I0226 09:12:29.811620 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:29 crc kubenswrapper[4741]: I0226 09:12:29.862352 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:30 crc kubenswrapper[4741]: I0226 09:12:30.685305 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:30 crc kubenswrapper[4741]: I0226 09:12:30.753972 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc6l6"] Feb 26 09:12:32 crc kubenswrapper[4741]: I0226 09:12:32.649323 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jc6l6" podUID="3af0b59e-3742-4820-ab61-96e14d59d330" containerName="registry-server" containerID="cri-o://be699db224159087635f5c3da8ee466a1f2ea36b6fd56bcb988fad0df5b51313" gracePeriod=2 Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.253737 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.329868 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af0b59e-3742-4820-ab61-96e14d59d330-utilities\") pod \"3af0b59e-3742-4820-ab61-96e14d59d330\" (UID: \"3af0b59e-3742-4820-ab61-96e14d59d330\") " Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.330129 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af0b59e-3742-4820-ab61-96e14d59d330-catalog-content\") pod \"3af0b59e-3742-4820-ab61-96e14d59d330\" (UID: \"3af0b59e-3742-4820-ab61-96e14d59d330\") " Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.330231 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njptk\" (UniqueName: \"kubernetes.io/projected/3af0b59e-3742-4820-ab61-96e14d59d330-kube-api-access-njptk\") pod \"3af0b59e-3742-4820-ab61-96e14d59d330\" (UID: \"3af0b59e-3742-4820-ab61-96e14d59d330\") " Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.331649 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af0b59e-3742-4820-ab61-96e14d59d330-utilities" (OuterVolumeSpecName: "utilities") pod "3af0b59e-3742-4820-ab61-96e14d59d330" (UID: "3af0b59e-3742-4820-ab61-96e14d59d330"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.340543 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af0b59e-3742-4820-ab61-96e14d59d330-kube-api-access-njptk" (OuterVolumeSpecName: "kube-api-access-njptk") pod "3af0b59e-3742-4820-ab61-96e14d59d330" (UID: "3af0b59e-3742-4820-ab61-96e14d59d330"). InnerVolumeSpecName "kube-api-access-njptk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.361581 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af0b59e-3742-4820-ab61-96e14d59d330-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3af0b59e-3742-4820-ab61-96e14d59d330" (UID: "3af0b59e-3742-4820-ab61-96e14d59d330"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.433961 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af0b59e-3742-4820-ab61-96e14d59d330-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.434006 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njptk\" (UniqueName: \"kubernetes.io/projected/3af0b59e-3742-4820-ab61-96e14d59d330-kube-api-access-njptk\") on node \"crc\" DevicePath \"\"" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.434020 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af0b59e-3742-4820-ab61-96e14d59d330-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.671425 4741 generic.go:334] "Generic (PLEG): container finished" podID="3af0b59e-3742-4820-ab61-96e14d59d330" containerID="be699db224159087635f5c3da8ee466a1f2ea36b6fd56bcb988fad0df5b51313" exitCode=0 Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.671493 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc6l6" event={"ID":"3af0b59e-3742-4820-ab61-96e14d59d330","Type":"ContainerDied","Data":"be699db224159087635f5c3da8ee466a1f2ea36b6fd56bcb988fad0df5b51313"} Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.671537 4741 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-jc6l6" event={"ID":"3af0b59e-3742-4820-ab61-96e14d59d330","Type":"ContainerDied","Data":"bde69ee65b2c474981cf3ead5c6feb29148e9fc1a1e41aca81b5cb79ae4dc16a"} Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.671587 4741 scope.go:117] "RemoveContainer" containerID="be699db224159087635f5c3da8ee466a1f2ea36b6fd56bcb988fad0df5b51313" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.671600 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jc6l6" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.709487 4741 scope.go:117] "RemoveContainer" containerID="5ac51b6daf6474e3e967abfb187ab0f2c88df1273534ab2429da31f0b9083e65" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.742468 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc6l6"] Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.758941 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc6l6"] Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.765188 4741 scope.go:117] "RemoveContainer" containerID="3ebd5114295d8cb91f64730fd1b7c569ea961bb750993394a100d82a61f28b94" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.804174 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af0b59e-3742-4820-ab61-96e14d59d330" path="/var/lib/kubelet/pods/3af0b59e-3742-4820-ab61-96e14d59d330/volumes" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.850092 4741 scope.go:117] "RemoveContainer" containerID="be699db224159087635f5c3da8ee466a1f2ea36b6fd56bcb988fad0df5b51313" Feb 26 09:12:33 crc kubenswrapper[4741]: E0226 09:12:33.850755 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be699db224159087635f5c3da8ee466a1f2ea36b6fd56bcb988fad0df5b51313\": container with ID 
starting with be699db224159087635f5c3da8ee466a1f2ea36b6fd56bcb988fad0df5b51313 not found: ID does not exist" containerID="be699db224159087635f5c3da8ee466a1f2ea36b6fd56bcb988fad0df5b51313" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.850823 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be699db224159087635f5c3da8ee466a1f2ea36b6fd56bcb988fad0df5b51313"} err="failed to get container status \"be699db224159087635f5c3da8ee466a1f2ea36b6fd56bcb988fad0df5b51313\": rpc error: code = NotFound desc = could not find container \"be699db224159087635f5c3da8ee466a1f2ea36b6fd56bcb988fad0df5b51313\": container with ID starting with be699db224159087635f5c3da8ee466a1f2ea36b6fd56bcb988fad0df5b51313 not found: ID does not exist" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.850867 4741 scope.go:117] "RemoveContainer" containerID="5ac51b6daf6474e3e967abfb187ab0f2c88df1273534ab2429da31f0b9083e65" Feb 26 09:12:33 crc kubenswrapper[4741]: E0226 09:12:33.851337 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ac51b6daf6474e3e967abfb187ab0f2c88df1273534ab2429da31f0b9083e65\": container with ID starting with 5ac51b6daf6474e3e967abfb187ab0f2c88df1273534ab2429da31f0b9083e65 not found: ID does not exist" containerID="5ac51b6daf6474e3e967abfb187ab0f2c88df1273534ab2429da31f0b9083e65" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.851375 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ac51b6daf6474e3e967abfb187ab0f2c88df1273534ab2429da31f0b9083e65"} err="failed to get container status \"5ac51b6daf6474e3e967abfb187ab0f2c88df1273534ab2429da31f0b9083e65\": rpc error: code = NotFound desc = could not find container \"5ac51b6daf6474e3e967abfb187ab0f2c88df1273534ab2429da31f0b9083e65\": container with ID starting with 5ac51b6daf6474e3e967abfb187ab0f2c88df1273534ab2429da31f0b9083e65 not found: 
ID does not exist" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.851405 4741 scope.go:117] "RemoveContainer" containerID="3ebd5114295d8cb91f64730fd1b7c569ea961bb750993394a100d82a61f28b94" Feb 26 09:12:33 crc kubenswrapper[4741]: E0226 09:12:33.851684 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ebd5114295d8cb91f64730fd1b7c569ea961bb750993394a100d82a61f28b94\": container with ID starting with 3ebd5114295d8cb91f64730fd1b7c569ea961bb750993394a100d82a61f28b94 not found: ID does not exist" containerID="3ebd5114295d8cb91f64730fd1b7c569ea961bb750993394a100d82a61f28b94" Feb 26 09:12:33 crc kubenswrapper[4741]: I0226 09:12:33.851722 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ebd5114295d8cb91f64730fd1b7c569ea961bb750993394a100d82a61f28b94"} err="failed to get container status \"3ebd5114295d8cb91f64730fd1b7c569ea961bb750993394a100d82a61f28b94\": rpc error: code = NotFound desc = could not find container \"3ebd5114295d8cb91f64730fd1b7c569ea961bb750993394a100d82a61f28b94\": container with ID starting with 3ebd5114295d8cb91f64730fd1b7c569ea961bb750993394a100d82a61f28b94 not found: ID does not exist" Feb 26 09:13:03 crc kubenswrapper[4741]: I0226 09:13:03.056081 4741 scope.go:117] "RemoveContainer" containerID="107f6bd5c9c71c37282720c18d4151a4d4271e3eccaac5ced7d7b07d292a0edd" Feb 26 09:13:38 crc kubenswrapper[4741]: I0226 09:13:38.978508 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qn899"] Feb 26 09:13:38 crc kubenswrapper[4741]: E0226 09:13:38.979682 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af0b59e-3742-4820-ab61-96e14d59d330" containerName="extract-utilities" Feb 26 09:13:38 crc kubenswrapper[4741]: I0226 09:13:38.979697 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af0b59e-3742-4820-ab61-96e14d59d330" 
containerName="extract-utilities" Feb 26 09:13:38 crc kubenswrapper[4741]: E0226 09:13:38.979712 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af0b59e-3742-4820-ab61-96e14d59d330" containerName="extract-content" Feb 26 09:13:38 crc kubenswrapper[4741]: I0226 09:13:38.979718 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af0b59e-3742-4820-ab61-96e14d59d330" containerName="extract-content" Feb 26 09:13:38 crc kubenswrapper[4741]: E0226 09:13:38.979780 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af0b59e-3742-4820-ab61-96e14d59d330" containerName="registry-server" Feb 26 09:13:38 crc kubenswrapper[4741]: I0226 09:13:38.979789 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af0b59e-3742-4820-ab61-96e14d59d330" containerName="registry-server" Feb 26 09:13:38 crc kubenswrapper[4741]: I0226 09:13:38.980067 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af0b59e-3742-4820-ab61-96e14d59d330" containerName="registry-server" Feb 26 09:13:38 crc kubenswrapper[4741]: I0226 09:13:38.982267 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:38 crc kubenswrapper[4741]: I0226 09:13:38.999301 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b7a645-48ca-413c-bb3b-a318b1a610ee-utilities\") pod \"certified-operators-qn899\" (UID: \"88b7a645-48ca-413c-bb3b-a318b1a610ee\") " pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:38 crc kubenswrapper[4741]: I0226 09:13:38.999521 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b7a645-48ca-413c-bb3b-a318b1a610ee-catalog-content\") pod \"certified-operators-qn899\" (UID: \"88b7a645-48ca-413c-bb3b-a318b1a610ee\") " pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:38 crc kubenswrapper[4741]: I0226 09:13:38.999722 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztt2j\" (UniqueName: \"kubernetes.io/projected/88b7a645-48ca-413c-bb3b-a318b1a610ee-kube-api-access-ztt2j\") pod \"certified-operators-qn899\" (UID: \"88b7a645-48ca-413c-bb3b-a318b1a610ee\") " pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:39 crc kubenswrapper[4741]: I0226 09:13:39.016206 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qn899"] Feb 26 09:13:39 crc kubenswrapper[4741]: I0226 09:13:39.101831 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b7a645-48ca-413c-bb3b-a318b1a610ee-catalog-content\") pod \"certified-operators-qn899\" (UID: \"88b7a645-48ca-413c-bb3b-a318b1a610ee\") " pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:39 crc kubenswrapper[4741]: I0226 09:13:39.101945 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ztt2j\" (UniqueName: \"kubernetes.io/projected/88b7a645-48ca-413c-bb3b-a318b1a610ee-kube-api-access-ztt2j\") pod \"certified-operators-qn899\" (UID: \"88b7a645-48ca-413c-bb3b-a318b1a610ee\") " pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:39 crc kubenswrapper[4741]: I0226 09:13:39.102093 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b7a645-48ca-413c-bb3b-a318b1a610ee-utilities\") pod \"certified-operators-qn899\" (UID: \"88b7a645-48ca-413c-bb3b-a318b1a610ee\") " pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:39 crc kubenswrapper[4741]: I0226 09:13:39.102328 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b7a645-48ca-413c-bb3b-a318b1a610ee-catalog-content\") pod \"certified-operators-qn899\" (UID: \"88b7a645-48ca-413c-bb3b-a318b1a610ee\") " pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:39 crc kubenswrapper[4741]: I0226 09:13:39.102646 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b7a645-48ca-413c-bb3b-a318b1a610ee-utilities\") pod \"certified-operators-qn899\" (UID: \"88b7a645-48ca-413c-bb3b-a318b1a610ee\") " pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:39 crc kubenswrapper[4741]: I0226 09:13:39.125043 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztt2j\" (UniqueName: \"kubernetes.io/projected/88b7a645-48ca-413c-bb3b-a318b1a610ee-kube-api-access-ztt2j\") pod \"certified-operators-qn899\" (UID: \"88b7a645-48ca-413c-bb3b-a318b1a610ee\") " pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:39 crc kubenswrapper[4741]: I0226 09:13:39.307638 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:39 crc kubenswrapper[4741]: I0226 09:13:39.880347 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qn899"] Feb 26 09:13:40 crc kubenswrapper[4741]: I0226 09:13:40.148009 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn899" event={"ID":"88b7a645-48ca-413c-bb3b-a318b1a610ee","Type":"ContainerStarted","Data":"e0edacffa973065680cbc6dca9149f597e85da1407505aa995a8ed25142708a7"} Feb 26 09:13:41 crc kubenswrapper[4741]: I0226 09:13:41.164954 4741 generic.go:334] "Generic (PLEG): container finished" podID="88b7a645-48ca-413c-bb3b-a318b1a610ee" containerID="9b4340dc63d9425debec31311907c45cdd66a77f528dc8421b0cb7b6ae61d3a6" exitCode=0 Feb 26 09:13:41 crc kubenswrapper[4741]: I0226 09:13:41.165866 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn899" event={"ID":"88b7a645-48ca-413c-bb3b-a318b1a610ee","Type":"ContainerDied","Data":"9b4340dc63d9425debec31311907c45cdd66a77f528dc8421b0cb7b6ae61d3a6"} Feb 26 09:13:42 crc kubenswrapper[4741]: I0226 09:13:42.186005 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn899" event={"ID":"88b7a645-48ca-413c-bb3b-a318b1a610ee","Type":"ContainerStarted","Data":"435dcec33fcb17e17af132a07af1dcd43f4ceb72b1962258a9db76b776863125"} Feb 26 09:13:45 crc kubenswrapper[4741]: I0226 09:13:45.240347 4741 generic.go:334] "Generic (PLEG): container finished" podID="88b7a645-48ca-413c-bb3b-a318b1a610ee" containerID="435dcec33fcb17e17af132a07af1dcd43f4ceb72b1962258a9db76b776863125" exitCode=0 Feb 26 09:13:45 crc kubenswrapper[4741]: I0226 09:13:45.240393 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn899" 
event={"ID":"88b7a645-48ca-413c-bb3b-a318b1a610ee","Type":"ContainerDied","Data":"435dcec33fcb17e17af132a07af1dcd43f4ceb72b1962258a9db76b776863125"} Feb 26 09:13:46 crc kubenswrapper[4741]: I0226 09:13:46.255665 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn899" event={"ID":"88b7a645-48ca-413c-bb3b-a318b1a610ee","Type":"ContainerStarted","Data":"231c3d22ceee6b9b20da6de263a1b5cd4fe74b1958a0a4220fd09a7f279b8936"} Feb 26 09:13:46 crc kubenswrapper[4741]: I0226 09:13:46.289416 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qn899" podStartSLOduration=3.7537003909999997 podStartE2EDuration="8.289391948s" podCreationTimestamp="2026-02-26 09:13:38 +0000 UTC" firstStartedPulling="2026-02-26 09:13:41.17389822 +0000 UTC m=+3656.169835627" lastFinishedPulling="2026-02-26 09:13:45.709589797 +0000 UTC m=+3660.705527184" observedRunningTime="2026-02-26 09:13:46.277887192 +0000 UTC m=+3661.273824589" watchObservedRunningTime="2026-02-26 09:13:46.289391948 +0000 UTC m=+3661.285329335" Feb 26 09:13:49 crc kubenswrapper[4741]: I0226 09:13:49.311196 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:49 crc kubenswrapper[4741]: I0226 09:13:49.311899 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:50 crc kubenswrapper[4741]: I0226 09:13:50.370334 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qn899" podUID="88b7a645-48ca-413c-bb3b-a318b1a610ee" containerName="registry-server" probeResult="failure" output=< Feb 26 09:13:50 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:13:50 crc kubenswrapper[4741]: > Feb 26 09:13:59 crc kubenswrapper[4741]: I0226 09:13:59.379770 4741 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:59 crc kubenswrapper[4741]: I0226 09:13:59.453077 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:13:59 crc kubenswrapper[4741]: I0226 09:13:59.634549 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qn899"] Feb 26 09:14:00 crc kubenswrapper[4741]: I0226 09:14:00.174103 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534954-f5jq9"] Feb 26 09:14:00 crc kubenswrapper[4741]: I0226 09:14:00.185933 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534954-f5jq9" Feb 26 09:14:00 crc kubenswrapper[4741]: I0226 09:14:00.196181 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:14:00 crc kubenswrapper[4741]: I0226 09:14:00.196594 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:14:00 crc kubenswrapper[4741]: I0226 09:14:00.204319 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:14:00 crc kubenswrapper[4741]: I0226 09:14:00.204880 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534954-f5jq9"] Feb 26 09:14:00 crc kubenswrapper[4741]: I0226 09:14:00.210663 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjk9s\" (UniqueName: \"kubernetes.io/projected/3ebe9765-3be0-4143-87c1-e1fcc10ce481-kube-api-access-pjk9s\") pod \"auto-csr-approver-29534954-f5jq9\" (UID: \"3ebe9765-3be0-4143-87c1-e1fcc10ce481\") " pod="openshift-infra/auto-csr-approver-29534954-f5jq9" Feb 26 09:14:00 crc 
kubenswrapper[4741]: I0226 09:14:00.320271 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjk9s\" (UniqueName: \"kubernetes.io/projected/3ebe9765-3be0-4143-87c1-e1fcc10ce481-kube-api-access-pjk9s\") pod \"auto-csr-approver-29534954-f5jq9\" (UID: \"3ebe9765-3be0-4143-87c1-e1fcc10ce481\") " pod="openshift-infra/auto-csr-approver-29534954-f5jq9" Feb 26 09:14:00 crc kubenswrapper[4741]: I0226 09:14:00.357812 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjk9s\" (UniqueName: \"kubernetes.io/projected/3ebe9765-3be0-4143-87c1-e1fcc10ce481-kube-api-access-pjk9s\") pod \"auto-csr-approver-29534954-f5jq9\" (UID: \"3ebe9765-3be0-4143-87c1-e1fcc10ce481\") " pod="openshift-infra/auto-csr-approver-29534954-f5jq9" Feb 26 09:14:00 crc kubenswrapper[4741]: I0226 09:14:00.444191 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qn899" podUID="88b7a645-48ca-413c-bb3b-a318b1a610ee" containerName="registry-server" containerID="cri-o://231c3d22ceee6b9b20da6de263a1b5cd4fe74b1958a0a4220fd09a7f279b8936" gracePeriod=2 Feb 26 09:14:00 crc kubenswrapper[4741]: I0226 09:14:00.513240 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534954-f5jq9" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.097469 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:14:01 crc kubenswrapper[4741]: W0226 09:14:01.124009 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ebe9765_3be0_4143_87c1_e1fcc10ce481.slice/crio-d519d793ec4cf910f09b7e10fe414d1af8795596c992863641b38ffa035ed113 WatchSource:0}: Error finding container d519d793ec4cf910f09b7e10fe414d1af8795596c992863641b38ffa035ed113: Status 404 returned error can't find the container with id d519d793ec4cf910f09b7e10fe414d1af8795596c992863641b38ffa035ed113 Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.134939 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534954-f5jq9"] Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.251058 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b7a645-48ca-413c-bb3b-a318b1a610ee-utilities\") pod \"88b7a645-48ca-413c-bb3b-a318b1a610ee\" (UID: \"88b7a645-48ca-413c-bb3b-a318b1a610ee\") " Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.251377 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztt2j\" (UniqueName: \"kubernetes.io/projected/88b7a645-48ca-413c-bb3b-a318b1a610ee-kube-api-access-ztt2j\") pod \"88b7a645-48ca-413c-bb3b-a318b1a610ee\" (UID: \"88b7a645-48ca-413c-bb3b-a318b1a610ee\") " Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.251424 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b7a645-48ca-413c-bb3b-a318b1a610ee-catalog-content\") pod \"88b7a645-48ca-413c-bb3b-a318b1a610ee\" (UID: \"88b7a645-48ca-413c-bb3b-a318b1a610ee\") " Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.252212 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/88b7a645-48ca-413c-bb3b-a318b1a610ee-utilities" (OuterVolumeSpecName: "utilities") pod "88b7a645-48ca-413c-bb3b-a318b1a610ee" (UID: "88b7a645-48ca-413c-bb3b-a318b1a610ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.259523 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b7a645-48ca-413c-bb3b-a318b1a610ee-kube-api-access-ztt2j" (OuterVolumeSpecName: "kube-api-access-ztt2j") pod "88b7a645-48ca-413c-bb3b-a318b1a610ee" (UID: "88b7a645-48ca-413c-bb3b-a318b1a610ee"). InnerVolumeSpecName "kube-api-access-ztt2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.303296 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88b7a645-48ca-413c-bb3b-a318b1a610ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88b7a645-48ca-413c-bb3b-a318b1a610ee" (UID: "88b7a645-48ca-413c-bb3b-a318b1a610ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.357010 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztt2j\" (UniqueName: \"kubernetes.io/projected/88b7a645-48ca-413c-bb3b-a318b1a610ee-kube-api-access-ztt2j\") on node \"crc\" DevicePath \"\"" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.357075 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b7a645-48ca-413c-bb3b-a318b1a610ee-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.357088 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b7a645-48ca-413c-bb3b-a318b1a610ee-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.459622 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534954-f5jq9" event={"ID":"3ebe9765-3be0-4143-87c1-e1fcc10ce481","Type":"ContainerStarted","Data":"d519d793ec4cf910f09b7e10fe414d1af8795596c992863641b38ffa035ed113"} Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.463643 4741 generic.go:334] "Generic (PLEG): container finished" podID="88b7a645-48ca-413c-bb3b-a318b1a610ee" containerID="231c3d22ceee6b9b20da6de263a1b5cd4fe74b1958a0a4220fd09a7f279b8936" exitCode=0 Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.463695 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn899" event={"ID":"88b7a645-48ca-413c-bb3b-a318b1a610ee","Type":"ContainerDied","Data":"231c3d22ceee6b9b20da6de263a1b5cd4fe74b1958a0a4220fd09a7f279b8936"} Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.463734 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn899" 
event={"ID":"88b7a645-48ca-413c-bb3b-a318b1a610ee","Type":"ContainerDied","Data":"e0edacffa973065680cbc6dca9149f597e85da1407505aa995a8ed25142708a7"} Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.463756 4741 scope.go:117] "RemoveContainer" containerID="231c3d22ceee6b9b20da6de263a1b5cd4fe74b1958a0a4220fd09a7f279b8936" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.463756 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qn899" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.503429 4741 scope.go:117] "RemoveContainer" containerID="435dcec33fcb17e17af132a07af1dcd43f4ceb72b1962258a9db76b776863125" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.512924 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qn899"] Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.526646 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qn899"] Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.532347 4741 scope.go:117] "RemoveContainer" containerID="9b4340dc63d9425debec31311907c45cdd66a77f528dc8421b0cb7b6ae61d3a6" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.585042 4741 scope.go:117] "RemoveContainer" containerID="231c3d22ceee6b9b20da6de263a1b5cd4fe74b1958a0a4220fd09a7f279b8936" Feb 26 09:14:01 crc kubenswrapper[4741]: E0226 09:14:01.585815 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231c3d22ceee6b9b20da6de263a1b5cd4fe74b1958a0a4220fd09a7f279b8936\": container with ID starting with 231c3d22ceee6b9b20da6de263a1b5cd4fe74b1958a0a4220fd09a7f279b8936 not found: ID does not exist" containerID="231c3d22ceee6b9b20da6de263a1b5cd4fe74b1958a0a4220fd09a7f279b8936" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.585864 4741 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"231c3d22ceee6b9b20da6de263a1b5cd4fe74b1958a0a4220fd09a7f279b8936"} err="failed to get container status \"231c3d22ceee6b9b20da6de263a1b5cd4fe74b1958a0a4220fd09a7f279b8936\": rpc error: code = NotFound desc = could not find container \"231c3d22ceee6b9b20da6de263a1b5cd4fe74b1958a0a4220fd09a7f279b8936\": container with ID starting with 231c3d22ceee6b9b20da6de263a1b5cd4fe74b1958a0a4220fd09a7f279b8936 not found: ID does not exist" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.585909 4741 scope.go:117] "RemoveContainer" containerID="435dcec33fcb17e17af132a07af1dcd43f4ceb72b1962258a9db76b776863125" Feb 26 09:14:01 crc kubenswrapper[4741]: E0226 09:14:01.586666 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"435dcec33fcb17e17af132a07af1dcd43f4ceb72b1962258a9db76b776863125\": container with ID starting with 435dcec33fcb17e17af132a07af1dcd43f4ceb72b1962258a9db76b776863125 not found: ID does not exist" containerID="435dcec33fcb17e17af132a07af1dcd43f4ceb72b1962258a9db76b776863125" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.586700 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435dcec33fcb17e17af132a07af1dcd43f4ceb72b1962258a9db76b776863125"} err="failed to get container status \"435dcec33fcb17e17af132a07af1dcd43f4ceb72b1962258a9db76b776863125\": rpc error: code = NotFound desc = could not find container \"435dcec33fcb17e17af132a07af1dcd43f4ceb72b1962258a9db76b776863125\": container with ID starting with 435dcec33fcb17e17af132a07af1dcd43f4ceb72b1962258a9db76b776863125 not found: ID does not exist" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.586740 4741 scope.go:117] "RemoveContainer" containerID="9b4340dc63d9425debec31311907c45cdd66a77f528dc8421b0cb7b6ae61d3a6" Feb 26 09:14:01 crc kubenswrapper[4741]: E0226 09:14:01.587231 4741 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"9b4340dc63d9425debec31311907c45cdd66a77f528dc8421b0cb7b6ae61d3a6\": container with ID starting with 9b4340dc63d9425debec31311907c45cdd66a77f528dc8421b0cb7b6ae61d3a6 not found: ID does not exist" containerID="9b4340dc63d9425debec31311907c45cdd66a77f528dc8421b0cb7b6ae61d3a6" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.587253 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b4340dc63d9425debec31311907c45cdd66a77f528dc8421b0cb7b6ae61d3a6"} err="failed to get container status \"9b4340dc63d9425debec31311907c45cdd66a77f528dc8421b0cb7b6ae61d3a6\": rpc error: code = NotFound desc = could not find container \"9b4340dc63d9425debec31311907c45cdd66a77f528dc8421b0cb7b6ae61d3a6\": container with ID starting with 9b4340dc63d9425debec31311907c45cdd66a77f528dc8421b0cb7b6ae61d3a6 not found: ID does not exist" Feb 26 09:14:01 crc kubenswrapper[4741]: I0226 09:14:01.803596 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88b7a645-48ca-413c-bb3b-a318b1a610ee" path="/var/lib/kubelet/pods/88b7a645-48ca-413c-bb3b-a318b1a610ee/volumes" Feb 26 09:14:02 crc kubenswrapper[4741]: I0226 09:14:02.479811 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534954-f5jq9" event={"ID":"3ebe9765-3be0-4143-87c1-e1fcc10ce481","Type":"ContainerStarted","Data":"9e2b948c4bab0fb4492f4d960eeadacb92912f309d41bbf32a30a9a7d0dfac0e"} Feb 26 09:14:02 crc kubenswrapper[4741]: I0226 09:14:02.512970 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534954-f5jq9" podStartSLOduration=1.688045744 podStartE2EDuration="2.512943933s" podCreationTimestamp="2026-02-26 09:14:00 +0000 UTC" firstStartedPulling="2026-02-26 09:14:01.127691834 +0000 UTC m=+3676.123629221" lastFinishedPulling="2026-02-26 09:14:01.952590023 +0000 UTC m=+3676.948527410" 
observedRunningTime="2026-02-26 09:14:02.499896012 +0000 UTC m=+3677.495833409" watchObservedRunningTime="2026-02-26 09:14:02.512943933 +0000 UTC m=+3677.508881320" Feb 26 09:14:03 crc kubenswrapper[4741]: I0226 09:14:03.498859 4741 generic.go:334] "Generic (PLEG): container finished" podID="3ebe9765-3be0-4143-87c1-e1fcc10ce481" containerID="9e2b948c4bab0fb4492f4d960eeadacb92912f309d41bbf32a30a9a7d0dfac0e" exitCode=0 Feb 26 09:14:03 crc kubenswrapper[4741]: I0226 09:14:03.498967 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534954-f5jq9" event={"ID":"3ebe9765-3be0-4143-87c1-e1fcc10ce481","Type":"ContainerDied","Data":"9e2b948c4bab0fb4492f4d960eeadacb92912f309d41bbf32a30a9a7d0dfac0e"} Feb 26 09:14:05 crc kubenswrapper[4741]: I0226 09:14:05.022207 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534954-f5jq9" Feb 26 09:14:05 crc kubenswrapper[4741]: I0226 09:14:05.075172 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjk9s\" (UniqueName: \"kubernetes.io/projected/3ebe9765-3be0-4143-87c1-e1fcc10ce481-kube-api-access-pjk9s\") pod \"3ebe9765-3be0-4143-87c1-e1fcc10ce481\" (UID: \"3ebe9765-3be0-4143-87c1-e1fcc10ce481\") " Feb 26 09:14:05 crc kubenswrapper[4741]: I0226 09:14:05.085010 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ebe9765-3be0-4143-87c1-e1fcc10ce481-kube-api-access-pjk9s" (OuterVolumeSpecName: "kube-api-access-pjk9s") pod "3ebe9765-3be0-4143-87c1-e1fcc10ce481" (UID: "3ebe9765-3be0-4143-87c1-e1fcc10ce481"). InnerVolumeSpecName "kube-api-access-pjk9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:14:05 crc kubenswrapper[4741]: I0226 09:14:05.180815 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjk9s\" (UniqueName: \"kubernetes.io/projected/3ebe9765-3be0-4143-87c1-e1fcc10ce481-kube-api-access-pjk9s\") on node \"crc\" DevicePath \"\"" Feb 26 09:14:05 crc kubenswrapper[4741]: I0226 09:14:05.531821 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534954-f5jq9" event={"ID":"3ebe9765-3be0-4143-87c1-e1fcc10ce481","Type":"ContainerDied","Data":"d519d793ec4cf910f09b7e10fe414d1af8795596c992863641b38ffa035ed113"} Feb 26 09:14:05 crc kubenswrapper[4741]: I0226 09:14:05.531870 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534954-f5jq9" Feb 26 09:14:05 crc kubenswrapper[4741]: I0226 09:14:05.531891 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d519d793ec4cf910f09b7e10fe414d1af8795596c992863641b38ffa035ed113" Feb 26 09:14:05 crc kubenswrapper[4741]: I0226 09:14:05.620059 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534948-5vx2k"] Feb 26 09:14:05 crc kubenswrapper[4741]: I0226 09:14:05.634933 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534948-5vx2k"] Feb 26 09:14:05 crc kubenswrapper[4741]: I0226 09:14:05.806384 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65e93514-0dd4-45c4-a1d7-4582cfc406d3" path="/var/lib/kubelet/pods/65e93514-0dd4-45c4-a1d7-4582cfc406d3/volumes" Feb 26 09:14:25 crc kubenswrapper[4741]: I0226 09:14:25.149877 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 09:14:25 crc kubenswrapper[4741]: I0226 09:14:25.151251 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:14:55 crc kubenswrapper[4741]: I0226 09:14:55.149309 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:14:55 crc kubenswrapper[4741]: I0226 09:14:55.149787 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.158573 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g"] Feb 26 09:15:00 crc kubenswrapper[4741]: E0226 09:15:00.159738 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b7a645-48ca-413c-bb3b-a318b1a610ee" containerName="registry-server" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.159753 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b7a645-48ca-413c-bb3b-a318b1a610ee" containerName="registry-server" Feb 26 09:15:00 crc kubenswrapper[4741]: E0226 09:15:00.159769 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b7a645-48ca-413c-bb3b-a318b1a610ee" containerName="extract-content" Feb 26 09:15:00 crc 
kubenswrapper[4741]: I0226 09:15:00.159775 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b7a645-48ca-413c-bb3b-a318b1a610ee" containerName="extract-content" Feb 26 09:15:00 crc kubenswrapper[4741]: E0226 09:15:00.159807 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b7a645-48ca-413c-bb3b-a318b1a610ee" containerName="extract-utilities" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.159814 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b7a645-48ca-413c-bb3b-a318b1a610ee" containerName="extract-utilities" Feb 26 09:15:00 crc kubenswrapper[4741]: E0226 09:15:00.159825 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ebe9765-3be0-4143-87c1-e1fcc10ce481" containerName="oc" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.159830 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ebe9765-3be0-4143-87c1-e1fcc10ce481" containerName="oc" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.160085 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ebe9765-3be0-4143-87c1-e1fcc10ce481" containerName="oc" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.160201 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b7a645-48ca-413c-bb3b-a318b1a610ee" containerName="registry-server" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.161251 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.163639 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.163939 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.171270 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g"] Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.241213 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c025a68-cfff-44a8-a399-e667f3e9dd80-secret-volume\") pod \"collect-profiles-29534955-pqn9g\" (UID: \"0c025a68-cfff-44a8-a399-e667f3e9dd80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.241506 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn5lh\" (UniqueName: \"kubernetes.io/projected/0c025a68-cfff-44a8-a399-e667f3e9dd80-kube-api-access-xn5lh\") pod \"collect-profiles-29534955-pqn9g\" (UID: \"0c025a68-cfff-44a8-a399-e667f3e9dd80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.241570 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c025a68-cfff-44a8-a399-e667f3e9dd80-config-volume\") pod \"collect-profiles-29534955-pqn9g\" (UID: \"0c025a68-cfff-44a8-a399-e667f3e9dd80\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.344674 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn5lh\" (UniqueName: \"kubernetes.io/projected/0c025a68-cfff-44a8-a399-e667f3e9dd80-kube-api-access-xn5lh\") pod \"collect-profiles-29534955-pqn9g\" (UID: \"0c025a68-cfff-44a8-a399-e667f3e9dd80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.344744 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c025a68-cfff-44a8-a399-e667f3e9dd80-config-volume\") pod \"collect-profiles-29534955-pqn9g\" (UID: \"0c025a68-cfff-44a8-a399-e667f3e9dd80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.344924 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c025a68-cfff-44a8-a399-e667f3e9dd80-secret-volume\") pod \"collect-profiles-29534955-pqn9g\" (UID: \"0c025a68-cfff-44a8-a399-e667f3e9dd80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.345906 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c025a68-cfff-44a8-a399-e667f3e9dd80-config-volume\") pod \"collect-profiles-29534955-pqn9g\" (UID: \"0c025a68-cfff-44a8-a399-e667f3e9dd80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.352959 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0c025a68-cfff-44a8-a399-e667f3e9dd80-secret-volume\") pod \"collect-profiles-29534955-pqn9g\" (UID: \"0c025a68-cfff-44a8-a399-e667f3e9dd80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.364358 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn5lh\" (UniqueName: \"kubernetes.io/projected/0c025a68-cfff-44a8-a399-e667f3e9dd80-kube-api-access-xn5lh\") pod \"collect-profiles-29534955-pqn9g\" (UID: \"0c025a68-cfff-44a8-a399-e667f3e9dd80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.490902 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" Feb 26 09:15:00 crc kubenswrapper[4741]: I0226 09:15:00.989044 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g"] Feb 26 09:15:01 crc kubenswrapper[4741]: I0226 09:15:01.886759 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" event={"ID":"0c025a68-cfff-44a8-a399-e667f3e9dd80","Type":"ContainerStarted","Data":"d460837293d517675c1be9db8cec3745e9c4fbba89f4642462b923069cc05b82"} Feb 26 09:15:01 crc kubenswrapper[4741]: I0226 09:15:01.887293 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" event={"ID":"0c025a68-cfff-44a8-a399-e667f3e9dd80","Type":"ContainerStarted","Data":"d429b1b0fd5cdc3b1b754fee5ed91513b203c4cf385eb308f2c1b52de7ab88a9"} Feb 26 09:15:01 crc kubenswrapper[4741]: I0226 09:15:01.924086 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" 
podStartSLOduration=1.924057671 podStartE2EDuration="1.924057671s" podCreationTimestamp="2026-02-26 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:15:01.904617069 +0000 UTC m=+3736.900554476" watchObservedRunningTime="2026-02-26 09:15:01.924057671 +0000 UTC m=+3736.919995078" Feb 26 09:15:02 crc kubenswrapper[4741]: I0226 09:15:02.903282 4741 generic.go:334] "Generic (PLEG): container finished" podID="0c025a68-cfff-44a8-a399-e667f3e9dd80" containerID="d460837293d517675c1be9db8cec3745e9c4fbba89f4642462b923069cc05b82" exitCode=0 Feb 26 09:15:02 crc kubenswrapper[4741]: I0226 09:15:02.903395 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" event={"ID":"0c025a68-cfff-44a8-a399-e667f3e9dd80","Type":"ContainerDied","Data":"d460837293d517675c1be9db8cec3745e9c4fbba89f4642462b923069cc05b82"} Feb 26 09:15:03 crc kubenswrapper[4741]: I0226 09:15:03.243733 4741 scope.go:117] "RemoveContainer" containerID="b509dec673e4de38c92a061fd775caf51367a4867485abf0ea2e053138e08393" Feb 26 09:15:04 crc kubenswrapper[4741]: I0226 09:15:04.389791 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" Feb 26 09:15:04 crc kubenswrapper[4741]: I0226 09:15:04.521634 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn5lh\" (UniqueName: \"kubernetes.io/projected/0c025a68-cfff-44a8-a399-e667f3e9dd80-kube-api-access-xn5lh\") pod \"0c025a68-cfff-44a8-a399-e667f3e9dd80\" (UID: \"0c025a68-cfff-44a8-a399-e667f3e9dd80\") " Feb 26 09:15:04 crc kubenswrapper[4741]: I0226 09:15:04.521778 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c025a68-cfff-44a8-a399-e667f3e9dd80-config-volume\") pod \"0c025a68-cfff-44a8-a399-e667f3e9dd80\" (UID: \"0c025a68-cfff-44a8-a399-e667f3e9dd80\") " Feb 26 09:15:04 crc kubenswrapper[4741]: I0226 09:15:04.521825 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c025a68-cfff-44a8-a399-e667f3e9dd80-secret-volume\") pod \"0c025a68-cfff-44a8-a399-e667f3e9dd80\" (UID: \"0c025a68-cfff-44a8-a399-e667f3e9dd80\") " Feb 26 09:15:04 crc kubenswrapper[4741]: I0226 09:15:04.522464 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c025a68-cfff-44a8-a399-e667f3e9dd80-config-volume" (OuterVolumeSpecName: "config-volume") pod "0c025a68-cfff-44a8-a399-e667f3e9dd80" (UID: "0c025a68-cfff-44a8-a399-e667f3e9dd80"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:15:04 crc kubenswrapper[4741]: I0226 09:15:04.523404 4741 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c025a68-cfff-44a8-a399-e667f3e9dd80-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 09:15:04 crc kubenswrapper[4741]: I0226 09:15:04.529068 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c025a68-cfff-44a8-a399-e667f3e9dd80-kube-api-access-xn5lh" (OuterVolumeSpecName: "kube-api-access-xn5lh") pod "0c025a68-cfff-44a8-a399-e667f3e9dd80" (UID: "0c025a68-cfff-44a8-a399-e667f3e9dd80"). InnerVolumeSpecName "kube-api-access-xn5lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:15:04 crc kubenswrapper[4741]: I0226 09:15:04.529402 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c025a68-cfff-44a8-a399-e667f3e9dd80-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0c025a68-cfff-44a8-a399-e667f3e9dd80" (UID: "0c025a68-cfff-44a8-a399-e667f3e9dd80"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:15:04 crc kubenswrapper[4741]: I0226 09:15:04.626616 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn5lh\" (UniqueName: \"kubernetes.io/projected/0c025a68-cfff-44a8-a399-e667f3e9dd80-kube-api-access-xn5lh\") on node \"crc\" DevicePath \"\"" Feb 26 09:15:04 crc kubenswrapper[4741]: I0226 09:15:04.626681 4741 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c025a68-cfff-44a8-a399-e667f3e9dd80-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 09:15:04 crc kubenswrapper[4741]: I0226 09:15:04.935680 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" event={"ID":"0c025a68-cfff-44a8-a399-e667f3e9dd80","Type":"ContainerDied","Data":"d429b1b0fd5cdc3b1b754fee5ed91513b203c4cf385eb308f2c1b52de7ab88a9"} Feb 26 09:15:04 crc kubenswrapper[4741]: I0226 09:15:04.935745 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d429b1b0fd5cdc3b1b754fee5ed91513b203c4cf385eb308f2c1b52de7ab88a9" Feb 26 09:15:04 crc kubenswrapper[4741]: I0226 09:15:04.935864 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g" Feb 26 09:15:05 crc kubenswrapper[4741]: I0226 09:15:05.005911 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"] Feb 26 09:15:05 crc kubenswrapper[4741]: I0226 09:15:05.024217 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534910-g46zf"] Feb 26 09:15:05 crc kubenswrapper[4741]: I0226 09:15:05.832973 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8470d1f-2910-449c-96e1-e8dbe81c8c4d" path="/var/lib/kubelet/pods/e8470d1f-2910-449c-96e1-e8dbe81c8c4d/volumes" Feb 26 09:15:25 crc kubenswrapper[4741]: I0226 09:15:25.149764 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:15:25 crc kubenswrapper[4741]: I0226 09:15:25.150573 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:15:25 crc kubenswrapper[4741]: I0226 09:15:25.150665 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 09:15:25 crc kubenswrapper[4741]: I0226 09:15:25.152463 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf"} 
pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 09:15:25 crc kubenswrapper[4741]: I0226 09:15:25.152587 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" gracePeriod=600 Feb 26 09:15:25 crc kubenswrapper[4741]: E0226 09:15:25.300829 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:15:26 crc kubenswrapper[4741]: I0226 09:15:26.201216 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" exitCode=0 Feb 26 09:15:26 crc kubenswrapper[4741]: I0226 09:15:26.201279 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf"} Feb 26 09:15:26 crc kubenswrapper[4741]: I0226 09:15:26.201338 4741 scope.go:117] "RemoveContainer" containerID="e5fa0e51372b3ec95f08ed1d9dd2b0c86b06f97041ffb79317f46eb7b4873e0f" Feb 26 09:15:26 crc kubenswrapper[4741]: I0226 09:15:26.202518 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 
26 09:15:26 crc kubenswrapper[4741]: E0226 09:15:26.203066 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:15:40 crc kubenswrapper[4741]: I0226 09:15:40.790759 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:15:40 crc kubenswrapper[4741]: E0226 09:15:40.791945 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:15:52 crc kubenswrapper[4741]: I0226 09:15:52.788145 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:15:52 crc kubenswrapper[4741]: E0226 09:15:52.789488 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:15:54 crc kubenswrapper[4741]: E0226 09:15:54.351308 4741 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.102.83.166:50882->38.102.83.166:36527: write tcp 38.102.83.166:50882->38.102.83.166:36527: write: broken pipe Feb 26 09:16:00 crc kubenswrapper[4741]: I0226 09:16:00.158220 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534956-sqp25"] Feb 26 09:16:00 crc kubenswrapper[4741]: E0226 09:16:00.159418 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c025a68-cfff-44a8-a399-e667f3e9dd80" containerName="collect-profiles" Feb 26 09:16:00 crc kubenswrapper[4741]: I0226 09:16:00.159437 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c025a68-cfff-44a8-a399-e667f3e9dd80" containerName="collect-profiles" Feb 26 09:16:00 crc kubenswrapper[4741]: I0226 09:16:00.159736 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c025a68-cfff-44a8-a399-e667f3e9dd80" containerName="collect-profiles" Feb 26 09:16:00 crc kubenswrapper[4741]: I0226 09:16:00.160880 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534956-sqp25" Feb 26 09:16:00 crc kubenswrapper[4741]: I0226 09:16:00.163800 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:16:00 crc kubenswrapper[4741]: I0226 09:16:00.164564 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:16:00 crc kubenswrapper[4741]: I0226 09:16:00.165762 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:16:00 crc kubenswrapper[4741]: I0226 09:16:00.173811 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534956-sqp25"] Feb 26 09:16:00 crc kubenswrapper[4741]: I0226 09:16:00.305044 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4fbq\" (UniqueName: 
\"kubernetes.io/projected/68ed6025-5e29-4978-9df4-feacc84d75f9-kube-api-access-f4fbq\") pod \"auto-csr-approver-29534956-sqp25\" (UID: \"68ed6025-5e29-4978-9df4-feacc84d75f9\") " pod="openshift-infra/auto-csr-approver-29534956-sqp25" Feb 26 09:16:00 crc kubenswrapper[4741]: I0226 09:16:00.407682 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4fbq\" (UniqueName: \"kubernetes.io/projected/68ed6025-5e29-4978-9df4-feacc84d75f9-kube-api-access-f4fbq\") pod \"auto-csr-approver-29534956-sqp25\" (UID: \"68ed6025-5e29-4978-9df4-feacc84d75f9\") " pod="openshift-infra/auto-csr-approver-29534956-sqp25" Feb 26 09:16:00 crc kubenswrapper[4741]: I0226 09:16:00.438365 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4fbq\" (UniqueName: \"kubernetes.io/projected/68ed6025-5e29-4978-9df4-feacc84d75f9-kube-api-access-f4fbq\") pod \"auto-csr-approver-29534956-sqp25\" (UID: \"68ed6025-5e29-4978-9df4-feacc84d75f9\") " pod="openshift-infra/auto-csr-approver-29534956-sqp25" Feb 26 09:16:00 crc kubenswrapper[4741]: I0226 09:16:00.483721 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534956-sqp25" Feb 26 09:16:01 crc kubenswrapper[4741]: I0226 09:16:01.017314 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534956-sqp25"] Feb 26 09:16:01 crc kubenswrapper[4741]: I0226 09:16:01.018975 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 09:16:01 crc kubenswrapper[4741]: I0226 09:16:01.706440 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534956-sqp25" event={"ID":"68ed6025-5e29-4978-9df4-feacc84d75f9","Type":"ContainerStarted","Data":"bf8b841eef5baa4bfc5bf5b438ca1c859411f96a4ba4f87cd0be20091b517b0d"} Feb 26 09:16:02 crc kubenswrapper[4741]: I0226 09:16:02.722914 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534956-sqp25" event={"ID":"68ed6025-5e29-4978-9df4-feacc84d75f9","Type":"ContainerStarted","Data":"52db6189d470d8110de4efd82104984873621986a64b16c023322f96a367488d"} Feb 26 09:16:02 crc kubenswrapper[4741]: I0226 09:16:02.747405 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534956-sqp25" podStartSLOduration=1.867530913 podStartE2EDuration="2.747380344s" podCreationTimestamp="2026-02-26 09:16:00 +0000 UTC" firstStartedPulling="2026-02-26 09:16:01.018481787 +0000 UTC m=+3796.014419184" lastFinishedPulling="2026-02-26 09:16:01.898331218 +0000 UTC m=+3796.894268615" observedRunningTime="2026-02-26 09:16:02.736524466 +0000 UTC m=+3797.732461853" watchObservedRunningTime="2026-02-26 09:16:02.747380344 +0000 UTC m=+3797.743317731" Feb 26 09:16:03 crc kubenswrapper[4741]: I0226 09:16:03.380498 4741 scope.go:117] "RemoveContainer" containerID="02f9906b9b45c7bfdaa6d7168f385984c426c2443b1af69e275dbe53b6bbd815" Feb 26 09:16:03 crc kubenswrapper[4741]: I0226 09:16:03.735550 4741 generic.go:334] "Generic (PLEG): container 
finished" podID="68ed6025-5e29-4978-9df4-feacc84d75f9" containerID="52db6189d470d8110de4efd82104984873621986a64b16c023322f96a367488d" exitCode=0 Feb 26 09:16:03 crc kubenswrapper[4741]: I0226 09:16:03.735878 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534956-sqp25" event={"ID":"68ed6025-5e29-4978-9df4-feacc84d75f9","Type":"ContainerDied","Data":"52db6189d470d8110de4efd82104984873621986a64b16c023322f96a367488d"} Feb 26 09:16:05 crc kubenswrapper[4741]: I0226 09:16:05.163260 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534956-sqp25" Feb 26 09:16:05 crc kubenswrapper[4741]: I0226 09:16:05.264945 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4fbq\" (UniqueName: \"kubernetes.io/projected/68ed6025-5e29-4978-9df4-feacc84d75f9-kube-api-access-f4fbq\") pod \"68ed6025-5e29-4978-9df4-feacc84d75f9\" (UID: \"68ed6025-5e29-4978-9df4-feacc84d75f9\") " Feb 26 09:16:05 crc kubenswrapper[4741]: I0226 09:16:05.270475 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ed6025-5e29-4978-9df4-feacc84d75f9-kube-api-access-f4fbq" (OuterVolumeSpecName: "kube-api-access-f4fbq") pod "68ed6025-5e29-4978-9df4-feacc84d75f9" (UID: "68ed6025-5e29-4978-9df4-feacc84d75f9"). InnerVolumeSpecName "kube-api-access-f4fbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:16:05 crc kubenswrapper[4741]: I0226 09:16:05.372119 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4fbq\" (UniqueName: \"kubernetes.io/projected/68ed6025-5e29-4978-9df4-feacc84d75f9-kube-api-access-f4fbq\") on node \"crc\" DevicePath \"\"" Feb 26 09:16:05 crc kubenswrapper[4741]: I0226 09:16:05.762221 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534956-sqp25" event={"ID":"68ed6025-5e29-4978-9df4-feacc84d75f9","Type":"ContainerDied","Data":"bf8b841eef5baa4bfc5bf5b438ca1c859411f96a4ba4f87cd0be20091b517b0d"} Feb 26 09:16:05 crc kubenswrapper[4741]: I0226 09:16:05.762554 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf8b841eef5baa4bfc5bf5b438ca1c859411f96a4ba4f87cd0be20091b517b0d" Feb 26 09:16:05 crc kubenswrapper[4741]: I0226 09:16:05.762335 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534956-sqp25" Feb 26 09:16:05 crc kubenswrapper[4741]: I0226 09:16:05.789710 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:16:05 crc kubenswrapper[4741]: E0226 09:16:05.790497 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:16:05 crc kubenswrapper[4741]: I0226 09:16:05.834950 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534950-zh98j"] Feb 26 09:16:05 crc kubenswrapper[4741]: I0226 09:16:05.848569 4741 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534950-zh98j"] Feb 26 09:16:07 crc kubenswrapper[4741]: I0226 09:16:07.802846 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5acc5b7a-c5ff-4967-baf8-69cf3ac36b95" path="/var/lib/kubelet/pods/5acc5b7a-c5ff-4967-baf8-69cf3ac36b95/volumes" Feb 26 09:16:16 crc kubenswrapper[4741]: I0226 09:16:16.788478 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:16:16 crc kubenswrapper[4741]: E0226 09:16:16.789627 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:16:28 crc kubenswrapper[4741]: I0226 09:16:28.787247 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:16:28 crc kubenswrapper[4741]: E0226 09:16:28.788236 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:16:42 crc kubenswrapper[4741]: I0226 09:16:42.788271 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:16:42 crc kubenswrapper[4741]: E0226 09:16:42.789473 4741 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:16:54 crc kubenswrapper[4741]: I0226 09:16:54.788015 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:16:54 crc kubenswrapper[4741]: E0226 09:16:54.788856 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:17:03 crc kubenswrapper[4741]: I0226 09:17:03.532920 4741 scope.go:117] "RemoveContainer" containerID="8928970caa60aefa656245b57c5c91cc2d72fa090e6e9a484d3eafc6e892e17d" Feb 26 09:17:07 crc kubenswrapper[4741]: I0226 09:17:07.788916 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:17:07 crc kubenswrapper[4741]: E0226 09:17:07.789947 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:17:18 crc kubenswrapper[4741]: I0226 09:17:18.788569 4741 scope.go:117] "RemoveContainer" 
containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:17:18 crc kubenswrapper[4741]: E0226 09:17:18.790549 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:17:30 crc kubenswrapper[4741]: I0226 09:17:30.788565 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:17:30 crc kubenswrapper[4741]: E0226 09:17:30.789488 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:17:43 crc kubenswrapper[4741]: I0226 09:17:43.787693 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:17:43 crc kubenswrapper[4741]: E0226 09:17:43.788674 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:17:57 crc kubenswrapper[4741]: I0226 09:17:57.788883 4741 scope.go:117] 
"RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:17:57 crc kubenswrapper[4741]: E0226 09:17:57.789753 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:18:00 crc kubenswrapper[4741]: I0226 09:18:00.160519 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534958-wzqw8"] Feb 26 09:18:00 crc kubenswrapper[4741]: E0226 09:18:00.162148 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ed6025-5e29-4978-9df4-feacc84d75f9" containerName="oc" Feb 26 09:18:00 crc kubenswrapper[4741]: I0226 09:18:00.162170 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ed6025-5e29-4978-9df4-feacc84d75f9" containerName="oc" Feb 26 09:18:00 crc kubenswrapper[4741]: I0226 09:18:00.162513 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="68ed6025-5e29-4978-9df4-feacc84d75f9" containerName="oc" Feb 26 09:18:00 crc kubenswrapper[4741]: I0226 09:18:00.163794 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534958-wzqw8" Feb 26 09:18:00 crc kubenswrapper[4741]: I0226 09:18:00.167284 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:18:00 crc kubenswrapper[4741]: I0226 09:18:00.168171 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:18:00 crc kubenswrapper[4741]: I0226 09:18:00.168346 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:18:00 crc kubenswrapper[4741]: I0226 09:18:00.170750 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534958-wzqw8"] Feb 26 09:18:00 crc kubenswrapper[4741]: I0226 09:18:00.217566 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bhf4\" (UniqueName: \"kubernetes.io/projected/fa45ee63-b020-4962-a665-7010d49ff027-kube-api-access-2bhf4\") pod \"auto-csr-approver-29534958-wzqw8\" (UID: \"fa45ee63-b020-4962-a665-7010d49ff027\") " pod="openshift-infra/auto-csr-approver-29534958-wzqw8" Feb 26 09:18:00 crc kubenswrapper[4741]: I0226 09:18:00.320401 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bhf4\" (UniqueName: \"kubernetes.io/projected/fa45ee63-b020-4962-a665-7010d49ff027-kube-api-access-2bhf4\") pod \"auto-csr-approver-29534958-wzqw8\" (UID: \"fa45ee63-b020-4962-a665-7010d49ff027\") " pod="openshift-infra/auto-csr-approver-29534958-wzqw8" Feb 26 09:18:00 crc kubenswrapper[4741]: I0226 09:18:00.343736 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bhf4\" (UniqueName: \"kubernetes.io/projected/fa45ee63-b020-4962-a665-7010d49ff027-kube-api-access-2bhf4\") pod \"auto-csr-approver-29534958-wzqw8\" (UID: \"fa45ee63-b020-4962-a665-7010d49ff027\") " 
pod="openshift-infra/auto-csr-approver-29534958-wzqw8" Feb 26 09:18:00 crc kubenswrapper[4741]: I0226 09:18:00.491028 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534958-wzqw8" Feb 26 09:18:01 crc kubenswrapper[4741]: I0226 09:18:01.053047 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534958-wzqw8"] Feb 26 09:18:01 crc kubenswrapper[4741]: I0226 09:18:01.711370 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534958-wzqw8" event={"ID":"fa45ee63-b020-4962-a665-7010d49ff027","Type":"ContainerStarted","Data":"6f0d3fd1a5d5a396138148fadc725819d99ae00329126370f76dd9f74669cc2d"} Feb 26 09:18:03 crc kubenswrapper[4741]: I0226 09:18:03.736004 4741 generic.go:334] "Generic (PLEG): container finished" podID="fa45ee63-b020-4962-a665-7010d49ff027" containerID="baa5013c20282191ff3ac520802b788a156da62a7a1b5195d1de30b9b4c9e0e2" exitCode=0 Feb 26 09:18:03 crc kubenswrapper[4741]: I0226 09:18:03.737197 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534958-wzqw8" event={"ID":"fa45ee63-b020-4962-a665-7010d49ff027","Type":"ContainerDied","Data":"baa5013c20282191ff3ac520802b788a156da62a7a1b5195d1de30b9b4c9e0e2"} Feb 26 09:18:05 crc kubenswrapper[4741]: I0226 09:18:05.166221 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534958-wzqw8" Feb 26 09:18:05 crc kubenswrapper[4741]: I0226 09:18:05.293764 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bhf4\" (UniqueName: \"kubernetes.io/projected/fa45ee63-b020-4962-a665-7010d49ff027-kube-api-access-2bhf4\") pod \"fa45ee63-b020-4962-a665-7010d49ff027\" (UID: \"fa45ee63-b020-4962-a665-7010d49ff027\") " Feb 26 09:18:05 crc kubenswrapper[4741]: I0226 09:18:05.301073 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa45ee63-b020-4962-a665-7010d49ff027-kube-api-access-2bhf4" (OuterVolumeSpecName: "kube-api-access-2bhf4") pod "fa45ee63-b020-4962-a665-7010d49ff027" (UID: "fa45ee63-b020-4962-a665-7010d49ff027"). InnerVolumeSpecName "kube-api-access-2bhf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:18:05 crc kubenswrapper[4741]: I0226 09:18:05.403953 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bhf4\" (UniqueName: \"kubernetes.io/projected/fa45ee63-b020-4962-a665-7010d49ff027-kube-api-access-2bhf4\") on node \"crc\" DevicePath \"\"" Feb 26 09:18:05 crc kubenswrapper[4741]: I0226 09:18:05.763500 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534958-wzqw8" event={"ID":"fa45ee63-b020-4962-a665-7010d49ff027","Type":"ContainerDied","Data":"6f0d3fd1a5d5a396138148fadc725819d99ae00329126370f76dd9f74669cc2d"} Feb 26 09:18:05 crc kubenswrapper[4741]: I0226 09:18:05.763552 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534958-wzqw8" Feb 26 09:18:05 crc kubenswrapper[4741]: I0226 09:18:05.763556 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f0d3fd1a5d5a396138148fadc725819d99ae00329126370f76dd9f74669cc2d" Feb 26 09:18:06 crc kubenswrapper[4741]: I0226 09:18:06.263665 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534952-dmkts"] Feb 26 09:18:06 crc kubenswrapper[4741]: I0226 09:18:06.275356 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534952-dmkts"] Feb 26 09:18:07 crc kubenswrapper[4741]: I0226 09:18:07.801676 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5676ab0-d942-4f77-9788-cc6cee938d79" path="/var/lib/kubelet/pods/b5676ab0-d942-4f77-9788-cc6cee938d79/volumes" Feb 26 09:18:12 crc kubenswrapper[4741]: I0226 09:18:12.788396 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:18:12 crc kubenswrapper[4741]: E0226 09:18:12.789449 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:18:24 crc kubenswrapper[4741]: I0226 09:18:24.788062 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:18:24 crc kubenswrapper[4741]: E0226 09:18:24.789068 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:18:35 crc kubenswrapper[4741]: I0226 09:18:35.803844 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:18:35 crc kubenswrapper[4741]: E0226 09:18:35.804891 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:18:49 crc kubenswrapper[4741]: I0226 09:18:49.789465 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:18:49 crc kubenswrapper[4741]: E0226 09:18:49.790450 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:19:03 crc kubenswrapper[4741]: I0226 09:19:03.656459 4741 scope.go:117] "RemoveContainer" containerID="c2c3ffacc0f3bf640fc4d732fa95502ba63b5c62ade1c4ca9eb926b963df947f" Feb 26 09:19:03 crc kubenswrapper[4741]: I0226 09:19:03.788647 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:19:03 crc kubenswrapper[4741]: 
E0226 09:19:03.789360 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:19:15 crc kubenswrapper[4741]: I0226 09:19:15.798138 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:19:15 crc kubenswrapper[4741]: E0226 09:19:15.799577 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:19:29 crc kubenswrapper[4741]: I0226 09:19:29.788539 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:19:29 crc kubenswrapper[4741]: E0226 09:19:29.789397 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:19:42 crc kubenswrapper[4741]: I0226 09:19:42.787572 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:19:42 crc 
kubenswrapper[4741]: E0226 09:19:42.788479 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:19:56 crc kubenswrapper[4741]: I0226 09:19:56.788176 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:19:56 crc kubenswrapper[4741]: E0226 09:19:56.789896 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:20:00 crc kubenswrapper[4741]: I0226 09:20:00.159126 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534960-4dgg4"] Feb 26 09:20:00 crc kubenswrapper[4741]: E0226 09:20:00.160350 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa45ee63-b020-4962-a665-7010d49ff027" containerName="oc" Feb 26 09:20:00 crc kubenswrapper[4741]: I0226 09:20:00.160373 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa45ee63-b020-4962-a665-7010d49ff027" containerName="oc" Feb 26 09:20:00 crc kubenswrapper[4741]: I0226 09:20:00.160711 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa45ee63-b020-4962-a665-7010d49ff027" containerName="oc" Feb 26 09:20:00 crc kubenswrapper[4741]: I0226 09:20:00.162477 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534960-4dgg4" Feb 26 09:20:00 crc kubenswrapper[4741]: I0226 09:20:00.167701 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:20:00 crc kubenswrapper[4741]: I0226 09:20:00.167773 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:20:00 crc kubenswrapper[4741]: I0226 09:20:00.167811 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:20:00 crc kubenswrapper[4741]: I0226 09:20:00.245008 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534960-4dgg4"] Feb 26 09:20:00 crc kubenswrapper[4741]: I0226 09:20:00.356780 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdrq2\" (UniqueName: \"kubernetes.io/projected/baa8885c-d2d3-4297-926d-0fb3d69c17cc-kube-api-access-mdrq2\") pod \"auto-csr-approver-29534960-4dgg4\" (UID: \"baa8885c-d2d3-4297-926d-0fb3d69c17cc\") " pod="openshift-infra/auto-csr-approver-29534960-4dgg4" Feb 26 09:20:00 crc kubenswrapper[4741]: I0226 09:20:00.459554 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdrq2\" (UniqueName: \"kubernetes.io/projected/baa8885c-d2d3-4297-926d-0fb3d69c17cc-kube-api-access-mdrq2\") pod \"auto-csr-approver-29534960-4dgg4\" (UID: \"baa8885c-d2d3-4297-926d-0fb3d69c17cc\") " pod="openshift-infra/auto-csr-approver-29534960-4dgg4" Feb 26 09:20:00 crc kubenswrapper[4741]: I0226 09:20:00.482368 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdrq2\" (UniqueName: \"kubernetes.io/projected/baa8885c-d2d3-4297-926d-0fb3d69c17cc-kube-api-access-mdrq2\") pod \"auto-csr-approver-29534960-4dgg4\" (UID: \"baa8885c-d2d3-4297-926d-0fb3d69c17cc\") " 
pod="openshift-infra/auto-csr-approver-29534960-4dgg4" Feb 26 09:20:00 crc kubenswrapper[4741]: I0226 09:20:00.488931 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534960-4dgg4" Feb 26 09:20:01 crc kubenswrapper[4741]: I0226 09:20:01.029861 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534960-4dgg4"] Feb 26 09:20:01 crc kubenswrapper[4741]: I0226 09:20:01.328599 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534960-4dgg4" event={"ID":"baa8885c-d2d3-4297-926d-0fb3d69c17cc","Type":"ContainerStarted","Data":"dc0f5723f4b09c3402228081aacd403b6e150612bf7436b77471f7730690a839"} Feb 26 09:20:03 crc kubenswrapper[4741]: I0226 09:20:03.374690 4741 generic.go:334] "Generic (PLEG): container finished" podID="baa8885c-d2d3-4297-926d-0fb3d69c17cc" containerID="4f4510fdb19c8ffc1fc5e14b6e18c5e8b46a371b67f18323139330eb3a8cd66b" exitCode=0 Feb 26 09:20:03 crc kubenswrapper[4741]: I0226 09:20:03.375208 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534960-4dgg4" event={"ID":"baa8885c-d2d3-4297-926d-0fb3d69c17cc","Type":"ContainerDied","Data":"4f4510fdb19c8ffc1fc5e14b6e18c5e8b46a371b67f18323139330eb3a8cd66b"} Feb 26 09:20:05 crc kubenswrapper[4741]: I0226 09:20:05.045459 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534960-4dgg4" Feb 26 09:20:05 crc kubenswrapper[4741]: I0226 09:20:05.132706 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdrq2\" (UniqueName: \"kubernetes.io/projected/baa8885c-d2d3-4297-926d-0fb3d69c17cc-kube-api-access-mdrq2\") pod \"baa8885c-d2d3-4297-926d-0fb3d69c17cc\" (UID: \"baa8885c-d2d3-4297-926d-0fb3d69c17cc\") " Feb 26 09:20:05 crc kubenswrapper[4741]: I0226 09:20:05.139453 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa8885c-d2d3-4297-926d-0fb3d69c17cc-kube-api-access-mdrq2" (OuterVolumeSpecName: "kube-api-access-mdrq2") pod "baa8885c-d2d3-4297-926d-0fb3d69c17cc" (UID: "baa8885c-d2d3-4297-926d-0fb3d69c17cc"). InnerVolumeSpecName "kube-api-access-mdrq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:20:05 crc kubenswrapper[4741]: I0226 09:20:05.237358 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdrq2\" (UniqueName: \"kubernetes.io/projected/baa8885c-d2d3-4297-926d-0fb3d69c17cc-kube-api-access-mdrq2\") on node \"crc\" DevicePath \"\"" Feb 26 09:20:05 crc kubenswrapper[4741]: I0226 09:20:05.403830 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534960-4dgg4" event={"ID":"baa8885c-d2d3-4297-926d-0fb3d69c17cc","Type":"ContainerDied","Data":"dc0f5723f4b09c3402228081aacd403b6e150612bf7436b77471f7730690a839"} Feb 26 09:20:05 crc kubenswrapper[4741]: I0226 09:20:05.403903 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc0f5723f4b09c3402228081aacd403b6e150612bf7436b77471f7730690a839" Feb 26 09:20:05 crc kubenswrapper[4741]: I0226 09:20:05.403937 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534960-4dgg4" Feb 26 09:20:06 crc kubenswrapper[4741]: I0226 09:20:06.131303 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534954-f5jq9"] Feb 26 09:20:06 crc kubenswrapper[4741]: I0226 09:20:06.144550 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534954-f5jq9"] Feb 26 09:20:07 crc kubenswrapper[4741]: I0226 09:20:07.809737 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ebe9765-3be0-4143-87c1-e1fcc10ce481" path="/var/lib/kubelet/pods/3ebe9765-3be0-4143-87c1-e1fcc10ce481/volumes" Feb 26 09:20:11 crc kubenswrapper[4741]: I0226 09:20:11.788408 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:20:11 crc kubenswrapper[4741]: E0226 09:20:11.789396 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:20:24 crc kubenswrapper[4741]: I0226 09:20:24.788901 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:20:24 crc kubenswrapper[4741]: E0226 09:20:24.790191 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" 
podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:20:30 crc kubenswrapper[4741]: I0226 09:20:30.835913 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jfcsp"] Feb 26 09:20:30 crc kubenswrapper[4741]: E0226 09:20:30.838065 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa8885c-d2d3-4297-926d-0fb3d69c17cc" containerName="oc" Feb 26 09:20:30 crc kubenswrapper[4741]: I0226 09:20:30.838169 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa8885c-d2d3-4297-926d-0fb3d69c17cc" containerName="oc" Feb 26 09:20:30 crc kubenswrapper[4741]: I0226 09:20:30.838485 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa8885c-d2d3-4297-926d-0fb3d69c17cc" containerName="oc" Feb 26 09:20:30 crc kubenswrapper[4741]: I0226 09:20:30.841208 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:30 crc kubenswrapper[4741]: I0226 09:20:30.871244 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jfcsp"] Feb 26 09:20:30 crc kubenswrapper[4741]: I0226 09:20:30.888707 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-utilities\") pod \"community-operators-jfcsp\" (UID: \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\") " pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:30 crc kubenswrapper[4741]: I0226 09:20:30.888768 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nbr2\" (UniqueName: \"kubernetes.io/projected/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-kube-api-access-8nbr2\") pod \"community-operators-jfcsp\" (UID: \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\") " pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:30 crc 
kubenswrapper[4741]: I0226 09:20:30.888948 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-catalog-content\") pod \"community-operators-jfcsp\" (UID: \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\") " pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:30 crc kubenswrapper[4741]: I0226 09:20:30.992566 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-catalog-content\") pod \"community-operators-jfcsp\" (UID: \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\") " pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:30 crc kubenswrapper[4741]: I0226 09:20:30.992829 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-utilities\") pod \"community-operators-jfcsp\" (UID: \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\") " pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:30 crc kubenswrapper[4741]: I0226 09:20:30.992885 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nbr2\" (UniqueName: \"kubernetes.io/projected/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-kube-api-access-8nbr2\") pod \"community-operators-jfcsp\" (UID: \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\") " pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:30 crc kubenswrapper[4741]: I0226 09:20:30.993060 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-catalog-content\") pod \"community-operators-jfcsp\" (UID: \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\") " pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:30 crc 
kubenswrapper[4741]: I0226 09:20:30.993355 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-utilities\") pod \"community-operators-jfcsp\" (UID: \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\") " pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:31 crc kubenswrapper[4741]: I0226 09:20:31.017028 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nbr2\" (UniqueName: \"kubernetes.io/projected/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-kube-api-access-8nbr2\") pod \"community-operators-jfcsp\" (UID: \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\") " pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:31 crc kubenswrapper[4741]: I0226 09:20:31.163591 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:31 crc kubenswrapper[4741]: I0226 09:20:31.772852 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jfcsp"] Feb 26 09:20:32 crc kubenswrapper[4741]: I0226 09:20:32.739949 4741 generic.go:334] "Generic (PLEG): container finished" podID="f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" containerID="8613e71b24db5c3fb26f1315a6f09eb1e24a9484d8743bf4a5606f78988235b8" exitCode=0 Feb 26 09:20:32 crc kubenswrapper[4741]: I0226 09:20:32.740325 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfcsp" event={"ID":"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb","Type":"ContainerDied","Data":"8613e71b24db5c3fb26f1315a6f09eb1e24a9484d8743bf4a5606f78988235b8"} Feb 26 09:20:32 crc kubenswrapper[4741]: I0226 09:20:32.740373 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfcsp" 
event={"ID":"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb","Type":"ContainerStarted","Data":"6385c82c5ca394ef712a6c8b92f4b88511768b44941d57f1e93b0432809199c5"} Feb 26 09:20:33 crc kubenswrapper[4741]: I0226 09:20:33.755745 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfcsp" event={"ID":"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb","Type":"ContainerStarted","Data":"ca7c8f45f98a33feedc46e8efa5cd3b10eccb668d4ade5718fa8030e29053b58"} Feb 26 09:20:37 crc kubenswrapper[4741]: I0226 09:20:37.813096 4741 generic.go:334] "Generic (PLEG): container finished" podID="f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" containerID="ca7c8f45f98a33feedc46e8efa5cd3b10eccb668d4ade5718fa8030e29053b58" exitCode=0 Feb 26 09:20:37 crc kubenswrapper[4741]: I0226 09:20:37.813159 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfcsp" event={"ID":"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb","Type":"ContainerDied","Data":"ca7c8f45f98a33feedc46e8efa5cd3b10eccb668d4ade5718fa8030e29053b58"} Feb 26 09:20:38 crc kubenswrapper[4741]: I0226 09:20:38.789477 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:20:38 crc kubenswrapper[4741]: I0226 09:20:38.829574 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfcsp" event={"ID":"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb","Type":"ContainerStarted","Data":"1983dcc92357fcadf64210817bb16ec9d2b71427c2be082a191a3f73e400c48b"} Feb 26 09:20:38 crc kubenswrapper[4741]: I0226 09:20:38.858130 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jfcsp" podStartSLOduration=3.365691508 podStartE2EDuration="8.858071548s" podCreationTimestamp="2026-02-26 09:20:30 +0000 UTC" firstStartedPulling="2026-02-26 09:20:32.743901614 +0000 UTC m=+4067.739839021" lastFinishedPulling="2026-02-26 09:20:38.236281674 
+0000 UTC m=+4073.232219061" observedRunningTime="2026-02-26 09:20:38.854950389 +0000 UTC m=+4073.850887766" watchObservedRunningTime="2026-02-26 09:20:38.858071548 +0000 UTC m=+4073.854008935" Feb 26 09:20:39 crc kubenswrapper[4741]: I0226 09:20:39.863347 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"242c977854168a500c9033cc8bb3d73aa979a9e54f1cf38951314e068b03eea7"} Feb 26 09:20:41 crc kubenswrapper[4741]: I0226 09:20:41.164030 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:41 crc kubenswrapper[4741]: I0226 09:20:41.164839 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:42 crc kubenswrapper[4741]: I0226 09:20:42.237775 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jfcsp" podUID="f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" containerName="registry-server" probeResult="failure" output=< Feb 26 09:20:42 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:20:42 crc kubenswrapper[4741]: > Feb 26 09:20:51 crc kubenswrapper[4741]: I0226 09:20:51.226727 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:51 crc kubenswrapper[4741]: I0226 09:20:51.311348 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:51 crc kubenswrapper[4741]: I0226 09:20:51.469808 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jfcsp"] Feb 26 09:20:53 crc kubenswrapper[4741]: I0226 09:20:53.035094 4741 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/community-operators-jfcsp" podUID="f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" containerName="registry-server" containerID="cri-o://1983dcc92357fcadf64210817bb16ec9d2b71427c2be082a191a3f73e400c48b" gracePeriod=2 Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.025069 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.048692 4741 generic.go:334] "Generic (PLEG): container finished" podID="f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" containerID="1983dcc92357fcadf64210817bb16ec9d2b71427c2be082a191a3f73e400c48b" exitCode=0 Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.048749 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfcsp" event={"ID":"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb","Type":"ContainerDied","Data":"1983dcc92357fcadf64210817bb16ec9d2b71427c2be082a191a3f73e400c48b"} Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.048789 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfcsp" event={"ID":"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb","Type":"ContainerDied","Data":"6385c82c5ca394ef712a6c8b92f4b88511768b44941d57f1e93b0432809199c5"} Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.048810 4741 scope.go:117] "RemoveContainer" containerID="1983dcc92357fcadf64210817bb16ec9d2b71427c2be082a191a3f73e400c48b" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.048985 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jfcsp" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.086227 4741 scope.go:117] "RemoveContainer" containerID="ca7c8f45f98a33feedc46e8efa5cd3b10eccb668d4ade5718fa8030e29053b58" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.116654 4741 scope.go:117] "RemoveContainer" containerID="8613e71b24db5c3fb26f1315a6f09eb1e24a9484d8743bf4a5606f78988235b8" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.146998 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nbr2\" (UniqueName: \"kubernetes.io/projected/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-kube-api-access-8nbr2\") pod \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\" (UID: \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\") " Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.147097 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-utilities\") pod \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\" (UID: \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\") " Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.147380 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-catalog-content\") pod \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\" (UID: \"f9cf07ab-34d2-4597-a0d4-839b94a4f9bb\") " Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.149714 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-utilities" (OuterVolumeSpecName: "utilities") pod "f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" (UID: "f9cf07ab-34d2-4597-a0d4-839b94a4f9bb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.157460 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-kube-api-access-8nbr2" (OuterVolumeSpecName: "kube-api-access-8nbr2") pod "f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" (UID: "f9cf07ab-34d2-4597-a0d4-839b94a4f9bb"). InnerVolumeSpecName "kube-api-access-8nbr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.208572 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" (UID: "f9cf07ab-34d2-4597-a0d4-839b94a4f9bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.236738 4741 scope.go:117] "RemoveContainer" containerID="1983dcc92357fcadf64210817bb16ec9d2b71427c2be082a191a3f73e400c48b" Feb 26 09:20:54 crc kubenswrapper[4741]: E0226 09:20:54.237387 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1983dcc92357fcadf64210817bb16ec9d2b71427c2be082a191a3f73e400c48b\": container with ID starting with 1983dcc92357fcadf64210817bb16ec9d2b71427c2be082a191a3f73e400c48b not found: ID does not exist" containerID="1983dcc92357fcadf64210817bb16ec9d2b71427c2be082a191a3f73e400c48b" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.237449 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1983dcc92357fcadf64210817bb16ec9d2b71427c2be082a191a3f73e400c48b"} err="failed to get container status \"1983dcc92357fcadf64210817bb16ec9d2b71427c2be082a191a3f73e400c48b\": rpc error: code = NotFound desc = could not find 
container \"1983dcc92357fcadf64210817bb16ec9d2b71427c2be082a191a3f73e400c48b\": container with ID starting with 1983dcc92357fcadf64210817bb16ec9d2b71427c2be082a191a3f73e400c48b not found: ID does not exist" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.237474 4741 scope.go:117] "RemoveContainer" containerID="ca7c8f45f98a33feedc46e8efa5cd3b10eccb668d4ade5718fa8030e29053b58" Feb 26 09:20:54 crc kubenswrapper[4741]: E0226 09:20:54.237760 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca7c8f45f98a33feedc46e8efa5cd3b10eccb668d4ade5718fa8030e29053b58\": container with ID starting with ca7c8f45f98a33feedc46e8efa5cd3b10eccb668d4ade5718fa8030e29053b58 not found: ID does not exist" containerID="ca7c8f45f98a33feedc46e8efa5cd3b10eccb668d4ade5718fa8030e29053b58" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.237840 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca7c8f45f98a33feedc46e8efa5cd3b10eccb668d4ade5718fa8030e29053b58"} err="failed to get container status \"ca7c8f45f98a33feedc46e8efa5cd3b10eccb668d4ade5718fa8030e29053b58\": rpc error: code = NotFound desc = could not find container \"ca7c8f45f98a33feedc46e8efa5cd3b10eccb668d4ade5718fa8030e29053b58\": container with ID starting with ca7c8f45f98a33feedc46e8efa5cd3b10eccb668d4ade5718fa8030e29053b58 not found: ID does not exist" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.237907 4741 scope.go:117] "RemoveContainer" containerID="8613e71b24db5c3fb26f1315a6f09eb1e24a9484d8743bf4a5606f78988235b8" Feb 26 09:20:54 crc kubenswrapper[4741]: E0226 09:20:54.238805 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8613e71b24db5c3fb26f1315a6f09eb1e24a9484d8743bf4a5606f78988235b8\": container with ID starting with 8613e71b24db5c3fb26f1315a6f09eb1e24a9484d8743bf4a5606f78988235b8 not found: ID does 
not exist" containerID="8613e71b24db5c3fb26f1315a6f09eb1e24a9484d8743bf4a5606f78988235b8" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.238882 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8613e71b24db5c3fb26f1315a6f09eb1e24a9484d8743bf4a5606f78988235b8"} err="failed to get container status \"8613e71b24db5c3fb26f1315a6f09eb1e24a9484d8743bf4a5606f78988235b8\": rpc error: code = NotFound desc = could not find container \"8613e71b24db5c3fb26f1315a6f09eb1e24a9484d8743bf4a5606f78988235b8\": container with ID starting with 8613e71b24db5c3fb26f1315a6f09eb1e24a9484d8743bf4a5606f78988235b8 not found: ID does not exist" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.251473 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.251507 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.251523 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nbr2\" (UniqueName: \"kubernetes.io/projected/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb-kube-api-access-8nbr2\") on node \"crc\" DevicePath \"\"" Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.397554 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jfcsp"] Feb 26 09:20:54 crc kubenswrapper[4741]: I0226 09:20:54.413925 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jfcsp"] Feb 26 09:20:55 crc kubenswrapper[4741]: I0226 09:20:55.805595 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" path="/var/lib/kubelet/pods/f9cf07ab-34d2-4597-a0d4-839b94a4f9bb/volumes" Feb 26 09:21:03 crc kubenswrapper[4741]: I0226 09:21:03.804791 4741 scope.go:117] "RemoveContainer" containerID="9e2b948c4bab0fb4492f4d960eeadacb92912f309d41bbf32a30a9a7d0dfac0e" Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.174624 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534962-b5s4m"] Feb 26 09:22:00 crc kubenswrapper[4741]: E0226 09:22:00.175882 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" containerName="extract-content" Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.175900 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" containerName="extract-content" Feb 26 09:22:00 crc kubenswrapper[4741]: E0226 09:22:00.175968 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" containerName="extract-utilities" Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.175977 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" containerName="extract-utilities" Feb 26 09:22:00 crc kubenswrapper[4741]: E0226 09:22:00.176006 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" containerName="registry-server" Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.176014 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" containerName="registry-server" Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.176377 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9cf07ab-34d2-4597-a0d4-839b94a4f9bb" containerName="registry-server" Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.177702 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534962-b5s4m" Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.181315 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.181390 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.182128 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.186655 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534962-b5s4m"] Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.331781 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq9gv\" (UniqueName: \"kubernetes.io/projected/b3bade60-1e04-4198-bac4-f215547e197d-kube-api-access-vq9gv\") pod \"auto-csr-approver-29534962-b5s4m\" (UID: \"b3bade60-1e04-4198-bac4-f215547e197d\") " pod="openshift-infra/auto-csr-approver-29534962-b5s4m" Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.437292 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq9gv\" (UniqueName: \"kubernetes.io/projected/b3bade60-1e04-4198-bac4-f215547e197d-kube-api-access-vq9gv\") pod \"auto-csr-approver-29534962-b5s4m\" (UID: \"b3bade60-1e04-4198-bac4-f215547e197d\") " pod="openshift-infra/auto-csr-approver-29534962-b5s4m" Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.477168 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq9gv\" (UniqueName: \"kubernetes.io/projected/b3bade60-1e04-4198-bac4-f215547e197d-kube-api-access-vq9gv\") pod \"auto-csr-approver-29534962-b5s4m\" (UID: \"b3bade60-1e04-4198-bac4-f215547e197d\") " 
pod="openshift-infra/auto-csr-approver-29534962-b5s4m" Feb 26 09:22:00 crc kubenswrapper[4741]: I0226 09:22:00.506839 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534962-b5s4m" Feb 26 09:22:01 crc kubenswrapper[4741]: I0226 09:22:01.168307 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534962-b5s4m"] Feb 26 09:22:01 crc kubenswrapper[4741]: I0226 09:22:01.168913 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 09:22:01 crc kubenswrapper[4741]: I0226 09:22:01.990637 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534962-b5s4m" event={"ID":"b3bade60-1e04-4198-bac4-f215547e197d","Type":"ContainerStarted","Data":"6676809be29ddf689a05098b74d9d256d5d371eb381642e46fd250b9abff05f8"} Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.004594 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534962-b5s4m" event={"ID":"b3bade60-1e04-4198-bac4-f215547e197d","Type":"ContainerStarted","Data":"5622d4bb63520ac537559b49863699638b0999d57b677482c39d84a880b095ae"} Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.034372 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534962-b5s4m" podStartSLOduration=2.10376474 podStartE2EDuration="3.0343447s" podCreationTimestamp="2026-02-26 09:22:00 +0000 UTC" firstStartedPulling="2026-02-26 09:22:01.16862459 +0000 UTC m=+4156.164561977" lastFinishedPulling="2026-02-26 09:22:02.09920455 +0000 UTC m=+4157.095141937" observedRunningTime="2026-02-26 09:22:03.023651247 +0000 UTC m=+4158.019588634" watchObservedRunningTime="2026-02-26 09:22:03.0343447 +0000 UTC m=+4158.030282087" Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.624448 4741 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-rkrfr"] Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.628857 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.643135 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rkrfr"] Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.746620 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ebf79c-81f6-4f13-8e09-e58ecde739a6-utilities\") pod \"redhat-operators-rkrfr\" (UID: \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\") " pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.746707 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ebf79c-81f6-4f13-8e09-e58ecde739a6-catalog-content\") pod \"redhat-operators-rkrfr\" (UID: \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\") " pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.746802 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r25fz\" (UniqueName: \"kubernetes.io/projected/32ebf79c-81f6-4f13-8e09-e58ecde739a6-kube-api-access-r25fz\") pod \"redhat-operators-rkrfr\" (UID: \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\") " pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.849465 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ebf79c-81f6-4f13-8e09-e58ecde739a6-utilities\") pod \"redhat-operators-rkrfr\" (UID: \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\") " 
pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.849740 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ebf79c-81f6-4f13-8e09-e58ecde739a6-catalog-content\") pod \"redhat-operators-rkrfr\" (UID: \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\") " pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.849876 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r25fz\" (UniqueName: \"kubernetes.io/projected/32ebf79c-81f6-4f13-8e09-e58ecde739a6-kube-api-access-r25fz\") pod \"redhat-operators-rkrfr\" (UID: \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\") " pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.850236 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ebf79c-81f6-4f13-8e09-e58ecde739a6-utilities\") pod \"redhat-operators-rkrfr\" (UID: \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\") " pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.850546 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ebf79c-81f6-4f13-8e09-e58ecde739a6-catalog-content\") pod \"redhat-operators-rkrfr\" (UID: \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\") " pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.889287 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r25fz\" (UniqueName: \"kubernetes.io/projected/32ebf79c-81f6-4f13-8e09-e58ecde739a6-kube-api-access-r25fz\") pod \"redhat-operators-rkrfr\" (UID: \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\") " pod="openshift-marketplace/redhat-operators-rkrfr" Feb 
26 09:22:03 crc kubenswrapper[4741]: I0226 09:22:03.985975 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:04 crc kubenswrapper[4741]: I0226 09:22:04.033685 4741 generic.go:334] "Generic (PLEG): container finished" podID="b3bade60-1e04-4198-bac4-f215547e197d" containerID="5622d4bb63520ac537559b49863699638b0999d57b677482c39d84a880b095ae" exitCode=0 Feb 26 09:22:04 crc kubenswrapper[4741]: I0226 09:22:04.033743 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534962-b5s4m" event={"ID":"b3bade60-1e04-4198-bac4-f215547e197d","Type":"ContainerDied","Data":"5622d4bb63520ac537559b49863699638b0999d57b677482c39d84a880b095ae"} Feb 26 09:22:04 crc kubenswrapper[4741]: I0226 09:22:04.588368 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rkrfr"] Feb 26 09:22:04 crc kubenswrapper[4741]: W0226 09:22:04.590685 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32ebf79c_81f6_4f13_8e09_e58ecde739a6.slice/crio-2efe8ccb084f90a2fcdf10e08414ee89886be1934393565ccdf388f65c9e4eb2 WatchSource:0}: Error finding container 2efe8ccb084f90a2fcdf10e08414ee89886be1934393565ccdf388f65c9e4eb2: Status 404 returned error can't find the container with id 2efe8ccb084f90a2fcdf10e08414ee89886be1934393565ccdf388f65c9e4eb2 Feb 26 09:22:05 crc kubenswrapper[4741]: I0226 09:22:05.054037 4741 generic.go:334] "Generic (PLEG): container finished" podID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" containerID="44b355e5bdbf8a674381301048fabf8c622847b0a09854e403aaa799b78c156f" exitCode=0 Feb 26 09:22:05 crc kubenswrapper[4741]: I0226 09:22:05.054193 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkrfr" 
event={"ID":"32ebf79c-81f6-4f13-8e09-e58ecde739a6","Type":"ContainerDied","Data":"44b355e5bdbf8a674381301048fabf8c622847b0a09854e403aaa799b78c156f"} Feb 26 09:22:05 crc kubenswrapper[4741]: I0226 09:22:05.054497 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkrfr" event={"ID":"32ebf79c-81f6-4f13-8e09-e58ecde739a6","Type":"ContainerStarted","Data":"2efe8ccb084f90a2fcdf10e08414ee89886be1934393565ccdf388f65c9e4eb2"} Feb 26 09:22:05 crc kubenswrapper[4741]: I0226 09:22:05.557489 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534962-b5s4m" Feb 26 09:22:05 crc kubenswrapper[4741]: I0226 09:22:05.624401 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq9gv\" (UniqueName: \"kubernetes.io/projected/b3bade60-1e04-4198-bac4-f215547e197d-kube-api-access-vq9gv\") pod \"b3bade60-1e04-4198-bac4-f215547e197d\" (UID: \"b3bade60-1e04-4198-bac4-f215547e197d\") " Feb 26 09:22:05 crc kubenswrapper[4741]: I0226 09:22:05.650532 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3bade60-1e04-4198-bac4-f215547e197d-kube-api-access-vq9gv" (OuterVolumeSpecName: "kube-api-access-vq9gv") pod "b3bade60-1e04-4198-bac4-f215547e197d" (UID: "b3bade60-1e04-4198-bac4-f215547e197d"). InnerVolumeSpecName "kube-api-access-vq9gv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:22:05 crc kubenswrapper[4741]: I0226 09:22:05.737931 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq9gv\" (UniqueName: \"kubernetes.io/projected/b3bade60-1e04-4198-bac4-f215547e197d-kube-api-access-vq9gv\") on node \"crc\" DevicePath \"\"" Feb 26 09:22:06 crc kubenswrapper[4741]: I0226 09:22:06.072164 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534962-b5s4m" event={"ID":"b3bade60-1e04-4198-bac4-f215547e197d","Type":"ContainerDied","Data":"6676809be29ddf689a05098b74d9d256d5d371eb381642e46fd250b9abff05f8"} Feb 26 09:22:06 crc kubenswrapper[4741]: I0226 09:22:06.072236 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6676809be29ddf689a05098b74d9d256d5d371eb381642e46fd250b9abff05f8" Feb 26 09:22:06 crc kubenswrapper[4741]: I0226 09:22:06.072265 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534962-b5s4m" Feb 26 09:22:06 crc kubenswrapper[4741]: I0226 09:22:06.119382 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534956-sqp25"] Feb 26 09:22:06 crc kubenswrapper[4741]: I0226 09:22:06.131371 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534956-sqp25"] Feb 26 09:22:07 crc kubenswrapper[4741]: I0226 09:22:07.089589 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkrfr" event={"ID":"32ebf79c-81f6-4f13-8e09-e58ecde739a6","Type":"ContainerStarted","Data":"3f375f69f5597cc8078320be071cc363360110aebf1b88ff5dcb4b9a451d5931"} Feb 26 09:22:07 crc kubenswrapper[4741]: I0226 09:22:07.802024 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68ed6025-5e29-4978-9df4-feacc84d75f9" path="/var/lib/kubelet/pods/68ed6025-5e29-4978-9df4-feacc84d75f9/volumes" 
Feb 26 09:22:20 crc kubenswrapper[4741]: I0226 09:22:20.256159 4741 generic.go:334] "Generic (PLEG): container finished" podID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" containerID="3f375f69f5597cc8078320be071cc363360110aebf1b88ff5dcb4b9a451d5931" exitCode=0 Feb 26 09:22:20 crc kubenswrapper[4741]: I0226 09:22:20.256254 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkrfr" event={"ID":"32ebf79c-81f6-4f13-8e09-e58ecde739a6","Type":"ContainerDied","Data":"3f375f69f5597cc8078320be071cc363360110aebf1b88ff5dcb4b9a451d5931"} Feb 26 09:22:25 crc kubenswrapper[4741]: I0226 09:22:25.319884 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkrfr" event={"ID":"32ebf79c-81f6-4f13-8e09-e58ecde739a6","Type":"ContainerStarted","Data":"11dc8ebab438774611d31232faa4c301774ff0b0c661a622229b0ed3ef36bee6"} Feb 26 09:22:26 crc kubenswrapper[4741]: I0226 09:22:26.375805 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rkrfr" podStartSLOduration=3.531482803 podStartE2EDuration="23.375779876s" podCreationTimestamp="2026-02-26 09:22:03 +0000 UTC" firstStartedPulling="2026-02-26 09:22:05.056982362 +0000 UTC m=+4160.052919739" lastFinishedPulling="2026-02-26 09:22:24.901279425 +0000 UTC m=+4179.897216812" observedRunningTime="2026-02-26 09:22:26.358192477 +0000 UTC m=+4181.354129864" watchObservedRunningTime="2026-02-26 09:22:26.375779876 +0000 UTC m=+4181.371717263" Feb 26 09:22:33 crc kubenswrapper[4741]: I0226 09:22:33.986349 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:33 crc kubenswrapper[4741]: I0226 09:22:33.986816 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:35 crc kubenswrapper[4741]: I0226 09:22:35.060001 4741 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-rkrfr" podUID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" containerName="registry-server" probeResult="failure" output=< Feb 26 09:22:35 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:22:35 crc kubenswrapper[4741]: > Feb 26 09:22:45 crc kubenswrapper[4741]: I0226 09:22:45.038927 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rkrfr" podUID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" containerName="registry-server" probeResult="failure" output=< Feb 26 09:22:45 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:22:45 crc kubenswrapper[4741]: > Feb 26 09:22:54 crc kubenswrapper[4741]: I0226 09:22:54.049976 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:54 crc kubenswrapper[4741]: I0226 09:22:54.109552 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:54 crc kubenswrapper[4741]: I0226 09:22:54.299741 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rkrfr"] Feb 26 09:22:55 crc kubenswrapper[4741]: I0226 09:22:55.149709 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:22:55 crc kubenswrapper[4741]: I0226 09:22:55.150103 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 26 09:22:55 crc kubenswrapper[4741]: I0226 09:22:55.749064 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rkrfr" podUID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" containerName="registry-server" containerID="cri-o://11dc8ebab438774611d31232faa4c301774ff0b0c661a622229b0ed3ef36bee6" gracePeriod=2 Feb 26 09:22:56 crc kubenswrapper[4741]: I0226 09:22:56.775315 4741 generic.go:334] "Generic (PLEG): container finished" podID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" containerID="11dc8ebab438774611d31232faa4c301774ff0b0c661a622229b0ed3ef36bee6" exitCode=0 Feb 26 09:22:56 crc kubenswrapper[4741]: I0226 09:22:56.776037 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkrfr" event={"ID":"32ebf79c-81f6-4f13-8e09-e58ecde739a6","Type":"ContainerDied","Data":"11dc8ebab438774611d31232faa4c301774ff0b0c661a622229b0ed3ef36bee6"} Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.199316 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.267989 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ebf79c-81f6-4f13-8e09-e58ecde739a6-utilities\") pod \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\" (UID: \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\") " Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.268363 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ebf79c-81f6-4f13-8e09-e58ecde739a6-catalog-content\") pod \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\" (UID: \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\") " Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.268490 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r25fz\" (UniqueName: \"kubernetes.io/projected/32ebf79c-81f6-4f13-8e09-e58ecde739a6-kube-api-access-r25fz\") pod \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\" (UID: \"32ebf79c-81f6-4f13-8e09-e58ecde739a6\") " Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.268989 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ebf79c-81f6-4f13-8e09-e58ecde739a6-utilities" (OuterVolumeSpecName: "utilities") pod "32ebf79c-81f6-4f13-8e09-e58ecde739a6" (UID: "32ebf79c-81f6-4f13-8e09-e58ecde739a6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.269832 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ebf79c-81f6-4f13-8e09-e58ecde739a6-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.277426 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ebf79c-81f6-4f13-8e09-e58ecde739a6-kube-api-access-r25fz" (OuterVolumeSpecName: "kube-api-access-r25fz") pod "32ebf79c-81f6-4f13-8e09-e58ecde739a6" (UID: "32ebf79c-81f6-4f13-8e09-e58ecde739a6"). InnerVolumeSpecName "kube-api-access-r25fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.372679 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r25fz\" (UniqueName: \"kubernetes.io/projected/32ebf79c-81f6-4f13-8e09-e58ecde739a6-kube-api-access-r25fz\") on node \"crc\" DevicePath \"\"" Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.407890 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ebf79c-81f6-4f13-8e09-e58ecde739a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32ebf79c-81f6-4f13-8e09-e58ecde739a6" (UID: "32ebf79c-81f6-4f13-8e09-e58ecde739a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.475752 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ebf79c-81f6-4f13-8e09-e58ecde739a6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.793527 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rkrfr" Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.809452 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkrfr" event={"ID":"32ebf79c-81f6-4f13-8e09-e58ecde739a6","Type":"ContainerDied","Data":"2efe8ccb084f90a2fcdf10e08414ee89886be1934393565ccdf388f65c9e4eb2"} Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.809528 4741 scope.go:117] "RemoveContainer" containerID="11dc8ebab438774611d31232faa4c301774ff0b0c661a622229b0ed3ef36bee6" Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.846131 4741 scope.go:117] "RemoveContainer" containerID="3f375f69f5597cc8078320be071cc363360110aebf1b88ff5dcb4b9a451d5931" Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.846364 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rkrfr"] Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.858728 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rkrfr"] Feb 26 09:22:57 crc kubenswrapper[4741]: I0226 09:22:57.877594 4741 scope.go:117] "RemoveContainer" containerID="44b355e5bdbf8a674381301048fabf8c622847b0a09854e403aaa799b78c156f" Feb 26 09:22:59 crc kubenswrapper[4741]: I0226 09:22:59.803871 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" path="/var/lib/kubelet/pods/32ebf79c-81f6-4f13-8e09-e58ecde739a6/volumes" Feb 26 09:23:04 crc kubenswrapper[4741]: I0226 09:23:04.028878 4741 scope.go:117] "RemoveContainer" containerID="52db6189d470d8110de4efd82104984873621986a64b16c023322f96a367488d" Feb 26 09:23:25 crc kubenswrapper[4741]: I0226 09:23:25.149391 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:23:25 crc kubenswrapper[4741]: I0226 09:23:25.150147 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.519656 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-95k6f"] Feb 26 09:23:48 crc kubenswrapper[4741]: E0226 09:23:48.521349 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" containerName="extract-utilities" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.521368 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" containerName="extract-utilities" Feb 26 09:23:48 crc kubenswrapper[4741]: E0226 09:23:48.521382 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" containerName="extract-content" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.521389 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" containerName="extract-content" Feb 26 09:23:48 crc kubenswrapper[4741]: E0226 09:23:48.521414 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" containerName="registry-server" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.521421 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" containerName="registry-server" Feb 26 09:23:48 crc kubenswrapper[4741]: E0226 09:23:48.521435 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3bade60-1e04-4198-bac4-f215547e197d" 
containerName="oc" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.521516 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3bade60-1e04-4198-bac4-f215547e197d" containerName="oc" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.521833 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ebf79c-81f6-4f13-8e09-e58ecde739a6" containerName="registry-server" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.521855 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3bade60-1e04-4198-bac4-f215547e197d" containerName="oc" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.524365 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.531597 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-95k6f"] Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.694748 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-catalog-content\") pod \"certified-operators-95k6f\" (UID: \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\") " pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.695154 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wz5t\" (UniqueName: \"kubernetes.io/projected/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-kube-api-access-6wz5t\") pod \"certified-operators-95k6f\" (UID: \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\") " pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.695968 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-utilities\") pod \"certified-operators-95k6f\" (UID: \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\") " pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.799341 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-utilities\") pod \"certified-operators-95k6f\" (UID: \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\") " pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.799566 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-catalog-content\") pod \"certified-operators-95k6f\" (UID: \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\") " pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.799614 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wz5t\" (UniqueName: \"kubernetes.io/projected/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-kube-api-access-6wz5t\") pod \"certified-operators-95k6f\" (UID: \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\") " pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.800078 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-utilities\") pod \"certified-operators-95k6f\" (UID: \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\") " pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.800138 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-catalog-content\") pod \"certified-operators-95k6f\" (UID: \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\") " pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.828812 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wz5t\" (UniqueName: \"kubernetes.io/projected/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-kube-api-access-6wz5t\") pod \"certified-operators-95k6f\" (UID: \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\") " pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:23:48 crc kubenswrapper[4741]: I0226 09:23:48.852733 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:23:49 crc kubenswrapper[4741]: I0226 09:23:49.451429 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-95k6f"] Feb 26 09:23:50 crc kubenswrapper[4741]: I0226 09:23:50.456811 4741 generic.go:334] "Generic (PLEG): container finished" podID="ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" containerID="3f28fd686b1bbdeb9ac93aa3adac066d5ac74c6e128d3d29cf5f3c460ec241f9" exitCode=0 Feb 26 09:23:50 crc kubenswrapper[4741]: I0226 09:23:50.456877 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95k6f" event={"ID":"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c","Type":"ContainerDied","Data":"3f28fd686b1bbdeb9ac93aa3adac066d5ac74c6e128d3d29cf5f3c460ec241f9"} Feb 26 09:23:50 crc kubenswrapper[4741]: I0226 09:23:50.457192 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95k6f" event={"ID":"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c","Type":"ContainerStarted","Data":"9ff2652a4d6a51e94395473c3ef7b66e6f74cdef682f107a5f5a6a5f8ad697af"} Feb 26 09:23:52 crc kubenswrapper[4741]: I0226 09:23:52.483763 4741 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-95k6f" event={"ID":"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c","Type":"ContainerStarted","Data":"5af1c56cc6f7c8b604cd95e5c44c1ac46864745baf078df6861020323d3eca8f"} Feb 26 09:23:54 crc kubenswrapper[4741]: I0226 09:23:54.514232 4741 generic.go:334] "Generic (PLEG): container finished" podID="ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" containerID="5af1c56cc6f7c8b604cd95e5c44c1ac46864745baf078df6861020323d3eca8f" exitCode=0 Feb 26 09:23:54 crc kubenswrapper[4741]: I0226 09:23:54.514549 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95k6f" event={"ID":"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c","Type":"ContainerDied","Data":"5af1c56cc6f7c8b604cd95e5c44c1ac46864745baf078df6861020323d3eca8f"} Feb 26 09:23:55 crc kubenswrapper[4741]: I0226 09:23:55.149392 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:23:55 crc kubenswrapper[4741]: I0226 09:23:55.149710 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:23:55 crc kubenswrapper[4741]: I0226 09:23:55.149785 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 09:23:55 crc kubenswrapper[4741]: I0226 09:23:55.151171 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"242c977854168a500c9033cc8bb3d73aa979a9e54f1cf38951314e068b03eea7"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 09:23:55 crc kubenswrapper[4741]: I0226 09:23:55.151243 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://242c977854168a500c9033cc8bb3d73aa979a9e54f1cf38951314e068b03eea7" gracePeriod=600 Feb 26 09:23:55 crc kubenswrapper[4741]: I0226 09:23:55.549881 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="242c977854168a500c9033cc8bb3d73aa979a9e54f1cf38951314e068b03eea7" exitCode=0 Feb 26 09:23:55 crc kubenswrapper[4741]: I0226 09:23:55.550118 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"242c977854168a500c9033cc8bb3d73aa979a9e54f1cf38951314e068b03eea7"} Feb 26 09:23:55 crc kubenswrapper[4741]: I0226 09:23:55.550470 4741 scope.go:117] "RemoveContainer" containerID="37c3a6db9262c69d82858c3292d1b1995c8362168b4f196de767d769404151bf" Feb 26 09:23:55 crc kubenswrapper[4741]: I0226 09:23:55.555191 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95k6f" event={"ID":"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c","Type":"ContainerStarted","Data":"a8006f483f22ba65c762dfb084bf3da56d81b9f6e7948d134982ad9e22aea082"} Feb 26 09:23:55 crc kubenswrapper[4741]: I0226 09:23:55.584397 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-95k6f" podStartSLOduration=3.104838388 
podStartE2EDuration="7.584318491s" podCreationTimestamp="2026-02-26 09:23:48 +0000 UTC" firstStartedPulling="2026-02-26 09:23:50.459255063 +0000 UTC m=+4265.455192450" lastFinishedPulling="2026-02-26 09:23:54.938735166 +0000 UTC m=+4269.934672553" observedRunningTime="2026-02-26 09:23:55.578529297 +0000 UTC m=+4270.574466704" watchObservedRunningTime="2026-02-26 09:23:55.584318491 +0000 UTC m=+4270.580255878" Feb 26 09:23:56 crc kubenswrapper[4741]: I0226 09:23:56.572660 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922"} Feb 26 09:23:58 crc kubenswrapper[4741]: I0226 09:23:58.853919 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:23:58 crc kubenswrapper[4741]: I0226 09:23:58.854617 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:23:59 crc kubenswrapper[4741]: I0226 09:23:59.085890 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:24:00 crc kubenswrapper[4741]: I0226 09:24:00.157585 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534964-8snt8"] Feb 26 09:24:00 crc kubenswrapper[4741]: I0226 09:24:00.162244 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534964-8snt8" Feb 26 09:24:00 crc kubenswrapper[4741]: I0226 09:24:00.166043 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:24:00 crc kubenswrapper[4741]: I0226 09:24:00.167187 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:24:00 crc kubenswrapper[4741]: I0226 09:24:00.169464 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:24:00 crc kubenswrapper[4741]: I0226 09:24:00.172542 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534964-8snt8"] Feb 26 09:24:00 crc kubenswrapper[4741]: I0226 09:24:00.177223 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl7th\" (UniqueName: \"kubernetes.io/projected/e66e0b27-0f2d-4c98-97c6-383c5e15e23f-kube-api-access-pl7th\") pod \"auto-csr-approver-29534964-8snt8\" (UID: \"e66e0b27-0f2d-4c98-97c6-383c5e15e23f\") " pod="openshift-infra/auto-csr-approver-29534964-8snt8" Feb 26 09:24:00 crc kubenswrapper[4741]: I0226 09:24:00.280521 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl7th\" (UniqueName: \"kubernetes.io/projected/e66e0b27-0f2d-4c98-97c6-383c5e15e23f-kube-api-access-pl7th\") pod \"auto-csr-approver-29534964-8snt8\" (UID: \"e66e0b27-0f2d-4c98-97c6-383c5e15e23f\") " pod="openshift-infra/auto-csr-approver-29534964-8snt8" Feb 26 09:24:00 crc kubenswrapper[4741]: I0226 09:24:00.310473 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl7th\" (UniqueName: \"kubernetes.io/projected/e66e0b27-0f2d-4c98-97c6-383c5e15e23f-kube-api-access-pl7th\") pod \"auto-csr-approver-29534964-8snt8\" (UID: \"e66e0b27-0f2d-4c98-97c6-383c5e15e23f\") " 
pod="openshift-infra/auto-csr-approver-29534964-8snt8" Feb 26 09:24:00 crc kubenswrapper[4741]: I0226 09:24:00.487988 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534964-8snt8" Feb 26 09:24:01 crc kubenswrapper[4741]: I0226 09:24:01.017836 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534964-8snt8"] Feb 26 09:24:02 crc kubenswrapper[4741]: I0226 09:24:02.672691 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534964-8snt8" event={"ID":"e66e0b27-0f2d-4c98-97c6-383c5e15e23f","Type":"ContainerStarted","Data":"fb1e49574844d8d10d21765480b611f08f9bb7b4dd61985da9ceeaec63c038ef"} Feb 26 09:24:06 crc kubenswrapper[4741]: I0226 09:24:06.724752 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534964-8snt8" event={"ID":"e66e0b27-0f2d-4c98-97c6-383c5e15e23f","Type":"ContainerStarted","Data":"8468538425f0b57c050043cfe60c73ce9447f1906a1c4aebef6f91ca4ece7964"} Feb 26 09:24:06 crc kubenswrapper[4741]: I0226 09:24:06.757692 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534964-8snt8" podStartSLOduration=2.811812194 podStartE2EDuration="6.757668368s" podCreationTimestamp="2026-02-26 09:24:00 +0000 UTC" firstStartedPulling="2026-02-26 09:24:01.911198214 +0000 UTC m=+4276.907135611" lastFinishedPulling="2026-02-26 09:24:05.857054398 +0000 UTC m=+4280.852991785" observedRunningTime="2026-02-26 09:24:06.741464128 +0000 UTC m=+4281.737401515" watchObservedRunningTime="2026-02-26 09:24:06.757668368 +0000 UTC m=+4281.753605755" Feb 26 09:24:07 crc kubenswrapper[4741]: I0226 09:24:07.739686 4741 generic.go:334] "Generic (PLEG): container finished" podID="e66e0b27-0f2d-4c98-97c6-383c5e15e23f" containerID="8468538425f0b57c050043cfe60c73ce9447f1906a1c4aebef6f91ca4ece7964" exitCode=0 Feb 26 09:24:07 crc 
kubenswrapper[4741]: I0226 09:24:07.739744 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534964-8snt8" event={"ID":"e66e0b27-0f2d-4c98-97c6-383c5e15e23f","Type":"ContainerDied","Data":"8468538425f0b57c050043cfe60c73ce9447f1906a1c4aebef6f91ca4ece7964"} Feb 26 09:24:08 crc kubenswrapper[4741]: I0226 09:24:08.920917 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:24:09 crc kubenswrapper[4741]: I0226 09:24:09.005218 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-95k6f"] Feb 26 09:24:09 crc kubenswrapper[4741]: I0226 09:24:09.189479 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534964-8snt8" Feb 26 09:24:09 crc kubenswrapper[4741]: I0226 09:24:09.307475 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl7th\" (UniqueName: \"kubernetes.io/projected/e66e0b27-0f2d-4c98-97c6-383c5e15e23f-kube-api-access-pl7th\") pod \"e66e0b27-0f2d-4c98-97c6-383c5e15e23f\" (UID: \"e66e0b27-0f2d-4c98-97c6-383c5e15e23f\") " Feb 26 09:24:09 crc kubenswrapper[4741]: I0226 09:24:09.431822 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66e0b27-0f2d-4c98-97c6-383c5e15e23f-kube-api-access-pl7th" (OuterVolumeSpecName: "kube-api-access-pl7th") pod "e66e0b27-0f2d-4c98-97c6-383c5e15e23f" (UID: "e66e0b27-0f2d-4c98-97c6-383c5e15e23f"). InnerVolumeSpecName "kube-api-access-pl7th". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:24:09 crc kubenswrapper[4741]: I0226 09:24:09.515875 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl7th\" (UniqueName: \"kubernetes.io/projected/e66e0b27-0f2d-4c98-97c6-383c5e15e23f-kube-api-access-pl7th\") on node \"crc\" DevicePath \"\"" Feb 26 09:24:09 crc kubenswrapper[4741]: I0226 09:24:09.769887 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-95k6f" podUID="ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" containerName="registry-server" containerID="cri-o://a8006f483f22ba65c762dfb084bf3da56d81b9f6e7948d134982ad9e22aea082" gracePeriod=2 Feb 26 09:24:09 crc kubenswrapper[4741]: I0226 09:24:09.770030 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534964-8snt8" Feb 26 09:24:09 crc kubenswrapper[4741]: I0226 09:24:09.771039 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534964-8snt8" event={"ID":"e66e0b27-0f2d-4c98-97c6-383c5e15e23f","Type":"ContainerDied","Data":"fb1e49574844d8d10d21765480b611f08f9bb7b4dd61985da9ceeaec63c038ef"} Feb 26 09:24:09 crc kubenswrapper[4741]: I0226 09:24:09.771161 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb1e49574844d8d10d21765480b611f08f9bb7b4dd61985da9ceeaec63c038ef" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.282195 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534958-wzqw8"] Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.298798 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534958-wzqw8"] Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.368578 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.547061 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-catalog-content\") pod \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\" (UID: \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\") " Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.548992 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wz5t\" (UniqueName: \"kubernetes.io/projected/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-kube-api-access-6wz5t\") pod \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\" (UID: \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\") " Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.549201 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-utilities\") pod \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\" (UID: \"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c\") " Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.551736 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-utilities" (OuterVolumeSpecName: "utilities") pod "ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" (UID: "ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.558032 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-kube-api-access-6wz5t" (OuterVolumeSpecName: "kube-api-access-6wz5t") pod "ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" (UID: "ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c"). InnerVolumeSpecName "kube-api-access-6wz5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.615072 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" (UID: "ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.653657 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.653710 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wz5t\" (UniqueName: \"kubernetes.io/projected/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-kube-api-access-6wz5t\") on node \"crc\" DevicePath \"\"" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.653732 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.786631 4741 generic.go:334] "Generic (PLEG): container finished" podID="ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" containerID="a8006f483f22ba65c762dfb084bf3da56d81b9f6e7948d134982ad9e22aea082" exitCode=0 Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.786693 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95k6f" event={"ID":"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c","Type":"ContainerDied","Data":"a8006f483f22ba65c762dfb084bf3da56d81b9f6e7948d134982ad9e22aea082"} Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.786733 4741 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-95k6f" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.786756 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95k6f" event={"ID":"ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c","Type":"ContainerDied","Data":"9ff2652a4d6a51e94395473c3ef7b66e6f74cdef682f107a5f5a6a5f8ad697af"} Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.786782 4741 scope.go:117] "RemoveContainer" containerID="a8006f483f22ba65c762dfb084bf3da56d81b9f6e7948d134982ad9e22aea082" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.825054 4741 scope.go:117] "RemoveContainer" containerID="5af1c56cc6f7c8b604cd95e5c44c1ac46864745baf078df6861020323d3eca8f" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.837178 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-95k6f"] Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.848787 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-95k6f"] Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.853570 4741 scope.go:117] "RemoveContainer" containerID="3f28fd686b1bbdeb9ac93aa3adac066d5ac74c6e128d3d29cf5f3c460ec241f9" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.924584 4741 scope.go:117] "RemoveContainer" containerID="a8006f483f22ba65c762dfb084bf3da56d81b9f6e7948d134982ad9e22aea082" Feb 26 09:24:10 crc kubenswrapper[4741]: E0226 09:24:10.925132 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8006f483f22ba65c762dfb084bf3da56d81b9f6e7948d134982ad9e22aea082\": container with ID starting with a8006f483f22ba65c762dfb084bf3da56d81b9f6e7948d134982ad9e22aea082 not found: ID does not exist" containerID="a8006f483f22ba65c762dfb084bf3da56d81b9f6e7948d134982ad9e22aea082" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.925218 
4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8006f483f22ba65c762dfb084bf3da56d81b9f6e7948d134982ad9e22aea082"} err="failed to get container status \"a8006f483f22ba65c762dfb084bf3da56d81b9f6e7948d134982ad9e22aea082\": rpc error: code = NotFound desc = could not find container \"a8006f483f22ba65c762dfb084bf3da56d81b9f6e7948d134982ad9e22aea082\": container with ID starting with a8006f483f22ba65c762dfb084bf3da56d81b9f6e7948d134982ad9e22aea082 not found: ID does not exist" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.925258 4741 scope.go:117] "RemoveContainer" containerID="5af1c56cc6f7c8b604cd95e5c44c1ac46864745baf078df6861020323d3eca8f" Feb 26 09:24:10 crc kubenswrapper[4741]: E0226 09:24:10.926478 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5af1c56cc6f7c8b604cd95e5c44c1ac46864745baf078df6861020323d3eca8f\": container with ID starting with 5af1c56cc6f7c8b604cd95e5c44c1ac46864745baf078df6861020323d3eca8f not found: ID does not exist" containerID="5af1c56cc6f7c8b604cd95e5c44c1ac46864745baf078df6861020323d3eca8f" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.926514 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5af1c56cc6f7c8b604cd95e5c44c1ac46864745baf078df6861020323d3eca8f"} err="failed to get container status \"5af1c56cc6f7c8b604cd95e5c44c1ac46864745baf078df6861020323d3eca8f\": rpc error: code = NotFound desc = could not find container \"5af1c56cc6f7c8b604cd95e5c44c1ac46864745baf078df6861020323d3eca8f\": container with ID starting with 5af1c56cc6f7c8b604cd95e5c44c1ac46864745baf078df6861020323d3eca8f not found: ID does not exist" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.926547 4741 scope.go:117] "RemoveContainer" containerID="3f28fd686b1bbdeb9ac93aa3adac066d5ac74c6e128d3d29cf5f3c460ec241f9" Feb 26 09:24:10 crc kubenswrapper[4741]: E0226 
09:24:10.927054 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f28fd686b1bbdeb9ac93aa3adac066d5ac74c6e128d3d29cf5f3c460ec241f9\": container with ID starting with 3f28fd686b1bbdeb9ac93aa3adac066d5ac74c6e128d3d29cf5f3c460ec241f9 not found: ID does not exist" containerID="3f28fd686b1bbdeb9ac93aa3adac066d5ac74c6e128d3d29cf5f3c460ec241f9" Feb 26 09:24:10 crc kubenswrapper[4741]: I0226 09:24:10.927076 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f28fd686b1bbdeb9ac93aa3adac066d5ac74c6e128d3d29cf5f3c460ec241f9"} err="failed to get container status \"3f28fd686b1bbdeb9ac93aa3adac066d5ac74c6e128d3d29cf5f3c460ec241f9\": rpc error: code = NotFound desc = could not find container \"3f28fd686b1bbdeb9ac93aa3adac066d5ac74c6e128d3d29cf5f3c460ec241f9\": container with ID starting with 3f28fd686b1bbdeb9ac93aa3adac066d5ac74c6e128d3d29cf5f3c460ec241f9 not found: ID does not exist" Feb 26 09:24:11 crc kubenswrapper[4741]: I0226 09:24:11.805131 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa45ee63-b020-4962-a665-7010d49ff027" path="/var/lib/kubelet/pods/fa45ee63-b020-4962-a665-7010d49ff027/volumes" Feb 26 09:24:11 crc kubenswrapper[4741]: I0226 09:24:11.808201 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" path="/var/lib/kubelet/pods/ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c/volumes" Feb 26 09:25:04 crc kubenswrapper[4741]: I0226 09:25:04.161970 4741 scope.go:117] "RemoveContainer" containerID="baa5013c20282191ff3ac520802b788a156da62a7a1b5195d1de30b9b4c9e0e2" Feb 26 09:25:55 crc kubenswrapper[4741]: I0226 09:25:55.149265 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 26 09:25:55 crc kubenswrapper[4741]: I0226 09:25:55.149910 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.159335 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534966-b7nls"] Feb 26 09:26:00 crc kubenswrapper[4741]: E0226 09:26:00.160708 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" containerName="registry-server" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.160731 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" containerName="registry-server" Feb 26 09:26:00 crc kubenswrapper[4741]: E0226 09:26:00.160758 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66e0b27-0f2d-4c98-97c6-383c5e15e23f" containerName="oc" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.160767 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66e0b27-0f2d-4c98-97c6-383c5e15e23f" containerName="oc" Feb 26 09:26:00 crc kubenswrapper[4741]: E0226 09:26:00.160800 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" containerName="extract-content" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.160812 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" containerName="extract-content" Feb 26 09:26:00 crc kubenswrapper[4741]: E0226 09:26:00.160850 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" containerName="extract-utilities" Feb 26 09:26:00 crc 
kubenswrapper[4741]: I0226 09:26:00.160858 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" containerName="extract-utilities" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.161181 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff6159ea-40e0-4bc8-8cd0-cda0e465ab2c" containerName="registry-server" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.161212 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="e66e0b27-0f2d-4c98-97c6-383c5e15e23f" containerName="oc" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.162556 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534966-b7nls" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.165508 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.165730 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.165753 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.172509 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534966-b7nls"] Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.211489 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdp6p\" (UniqueName: \"kubernetes.io/projected/8ecab1ab-0c41-46cd-bc17-016aa9712b46-kube-api-access-fdp6p\") pod \"auto-csr-approver-29534966-b7nls\" (UID: \"8ecab1ab-0c41-46cd-bc17-016aa9712b46\") " pod="openshift-infra/auto-csr-approver-29534966-b7nls" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.315291 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fdp6p\" (UniqueName: \"kubernetes.io/projected/8ecab1ab-0c41-46cd-bc17-016aa9712b46-kube-api-access-fdp6p\") pod \"auto-csr-approver-29534966-b7nls\" (UID: \"8ecab1ab-0c41-46cd-bc17-016aa9712b46\") " pod="openshift-infra/auto-csr-approver-29534966-b7nls" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.337248 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdp6p\" (UniqueName: \"kubernetes.io/projected/8ecab1ab-0c41-46cd-bc17-016aa9712b46-kube-api-access-fdp6p\") pod \"auto-csr-approver-29534966-b7nls\" (UID: \"8ecab1ab-0c41-46cd-bc17-016aa9712b46\") " pod="openshift-infra/auto-csr-approver-29534966-b7nls" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.484403 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534966-b7nls" Feb 26 09:26:00 crc kubenswrapper[4741]: I0226 09:26:00.973813 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534966-b7nls"] Feb 26 09:26:01 crc kubenswrapper[4741]: I0226 09:26:01.231499 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534966-b7nls" event={"ID":"8ecab1ab-0c41-46cd-bc17-016aa9712b46","Type":"ContainerStarted","Data":"dcdfb1b9796b8c2e39e7cad5adf8eae5541277100be42db852798ee8c9661dd0"} Feb 26 09:26:03 crc kubenswrapper[4741]: I0226 09:26:03.263918 4741 generic.go:334] "Generic (PLEG): container finished" podID="8ecab1ab-0c41-46cd-bc17-016aa9712b46" containerID="b482827d4551292d5bc50744530eaaf3956375f93f43782b73b38dd94116c69c" exitCode=0 Feb 26 09:26:03 crc kubenswrapper[4741]: I0226 09:26:03.264370 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534966-b7nls" 
event={"ID":"8ecab1ab-0c41-46cd-bc17-016aa9712b46","Type":"ContainerDied","Data":"b482827d4551292d5bc50744530eaaf3956375f93f43782b73b38dd94116c69c"} Feb 26 09:26:04 crc kubenswrapper[4741]: I0226 09:26:04.715381 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534966-b7nls" Feb 26 09:26:04 crc kubenswrapper[4741]: I0226 09:26:04.762076 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdp6p\" (UniqueName: \"kubernetes.io/projected/8ecab1ab-0c41-46cd-bc17-016aa9712b46-kube-api-access-fdp6p\") pod \"8ecab1ab-0c41-46cd-bc17-016aa9712b46\" (UID: \"8ecab1ab-0c41-46cd-bc17-016aa9712b46\") " Feb 26 09:26:04 crc kubenswrapper[4741]: I0226 09:26:04.771000 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ecab1ab-0c41-46cd-bc17-016aa9712b46-kube-api-access-fdp6p" (OuterVolumeSpecName: "kube-api-access-fdp6p") pod "8ecab1ab-0c41-46cd-bc17-016aa9712b46" (UID: "8ecab1ab-0c41-46cd-bc17-016aa9712b46"). InnerVolumeSpecName "kube-api-access-fdp6p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:26:04 crc kubenswrapper[4741]: I0226 09:26:04.867783 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdp6p\" (UniqueName: \"kubernetes.io/projected/8ecab1ab-0c41-46cd-bc17-016aa9712b46-kube-api-access-fdp6p\") on node \"crc\" DevicePath \"\"" Feb 26 09:26:05 crc kubenswrapper[4741]: I0226 09:26:05.288660 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534966-b7nls" event={"ID":"8ecab1ab-0c41-46cd-bc17-016aa9712b46","Type":"ContainerDied","Data":"dcdfb1b9796b8c2e39e7cad5adf8eae5541277100be42db852798ee8c9661dd0"} Feb 26 09:26:05 crc kubenswrapper[4741]: I0226 09:26:05.288715 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcdfb1b9796b8c2e39e7cad5adf8eae5541277100be42db852798ee8c9661dd0" Feb 26 09:26:05 crc kubenswrapper[4741]: I0226 09:26:05.288789 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534966-b7nls" Feb 26 09:26:05 crc kubenswrapper[4741]: I0226 09:26:05.831455 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534960-4dgg4"] Feb 26 09:26:05 crc kubenswrapper[4741]: I0226 09:26:05.847607 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534960-4dgg4"] Feb 26 09:26:07 crc kubenswrapper[4741]: I0226 09:26:07.806955 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa8885c-d2d3-4297-926d-0fb3d69c17cc" path="/var/lib/kubelet/pods/baa8885c-d2d3-4297-926d-0fb3d69c17cc/volumes" Feb 26 09:26:25 crc kubenswrapper[4741]: I0226 09:26:25.149577 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 09:26:25 crc kubenswrapper[4741]: I0226 09:26:25.150458 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:26:55 crc kubenswrapper[4741]: I0226 09:26:55.149516 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:26:55 crc kubenswrapper[4741]: I0226 09:26:55.150391 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:26:55 crc kubenswrapper[4741]: I0226 09:26:55.150480 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 09:26:55 crc kubenswrapper[4741]: I0226 09:26:55.152561 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 09:26:55 crc kubenswrapper[4741]: I0226 09:26:55.152817 4741 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" gracePeriod=600 Feb 26 09:26:55 crc kubenswrapper[4741]: E0226 09:26:55.275870 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:26:56 crc kubenswrapper[4741]: I0226 09:26:56.000479 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" exitCode=0 Feb 26 09:26:56 crc kubenswrapper[4741]: I0226 09:26:56.000587 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922"} Feb 26 09:26:56 crc kubenswrapper[4741]: I0226 09:26:56.000896 4741 scope.go:117] "RemoveContainer" containerID="242c977854168a500c9033cc8bb3d73aa979a9e54f1cf38951314e068b03eea7" Feb 26 09:26:56 crc kubenswrapper[4741]: I0226 09:26:56.002005 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:26:56 crc kubenswrapper[4741]: E0226 09:26:56.002450 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:27:04 crc kubenswrapper[4741]: I0226 09:27:04.342500 4741 scope.go:117] "RemoveContainer" containerID="4f4510fdb19c8ffc1fc5e14b6e18c5e8b46a371b67f18323139330eb3a8cd66b" Feb 26 09:27:09 crc kubenswrapper[4741]: I0226 09:27:09.787776 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:27:09 crc kubenswrapper[4741]: E0226 09:27:09.788738 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.712503 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5wrfn"] Feb 26 09:27:11 crc kubenswrapper[4741]: E0226 09:27:11.713525 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ecab1ab-0c41-46cd-bc17-016aa9712b46" containerName="oc" Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.713544 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecab1ab-0c41-46cd-bc17-016aa9712b46" containerName="oc" Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.713811 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ecab1ab-0c41-46cd-bc17-016aa9712b46" containerName="oc" Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.716043 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.723530 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799072e7-085a-4481-bacb-c0b4a32c39dc-utilities\") pod \"redhat-marketplace-5wrfn\" (UID: \"799072e7-085a-4481-bacb-c0b4a32c39dc\") " pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.723788 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wthp2\" (UniqueName: \"kubernetes.io/projected/799072e7-085a-4481-bacb-c0b4a32c39dc-kube-api-access-wthp2\") pod \"redhat-marketplace-5wrfn\" (UID: \"799072e7-085a-4481-bacb-c0b4a32c39dc\") " pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.724338 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799072e7-085a-4481-bacb-c0b4a32c39dc-catalog-content\") pod \"redhat-marketplace-5wrfn\" (UID: \"799072e7-085a-4481-bacb-c0b4a32c39dc\") " pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.749090 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wrfn"] Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.827580 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799072e7-085a-4481-bacb-c0b4a32c39dc-utilities\") pod \"redhat-marketplace-5wrfn\" (UID: \"799072e7-085a-4481-bacb-c0b4a32c39dc\") " pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.827799 4741 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wthp2\" (UniqueName: \"kubernetes.io/projected/799072e7-085a-4481-bacb-c0b4a32c39dc-kube-api-access-wthp2\") pod \"redhat-marketplace-5wrfn\" (UID: \"799072e7-085a-4481-bacb-c0b4a32c39dc\") " pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.827918 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799072e7-085a-4481-bacb-c0b4a32c39dc-catalog-content\") pod \"redhat-marketplace-5wrfn\" (UID: \"799072e7-085a-4481-bacb-c0b4a32c39dc\") " pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.828501 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799072e7-085a-4481-bacb-c0b4a32c39dc-catalog-content\") pod \"redhat-marketplace-5wrfn\" (UID: \"799072e7-085a-4481-bacb-c0b4a32c39dc\") " pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.828765 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799072e7-085a-4481-bacb-c0b4a32c39dc-utilities\") pod \"redhat-marketplace-5wrfn\" (UID: \"799072e7-085a-4481-bacb-c0b4a32c39dc\") " pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:11 crc kubenswrapper[4741]: I0226 09:27:11.885254 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wthp2\" (UniqueName: \"kubernetes.io/projected/799072e7-085a-4481-bacb-c0b4a32c39dc-kube-api-access-wthp2\") pod \"redhat-marketplace-5wrfn\" (UID: \"799072e7-085a-4481-bacb-c0b4a32c39dc\") " pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:12 crc kubenswrapper[4741]: I0226 09:27:12.048277 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:12 crc kubenswrapper[4741]: I0226 09:27:12.548609 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wrfn"] Feb 26 09:27:13 crc kubenswrapper[4741]: I0226 09:27:13.237448 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wrfn" event={"ID":"799072e7-085a-4481-bacb-c0b4a32c39dc","Type":"ContainerStarted","Data":"a5908c073f70394c33604e004228fab16ff97823a527339d1307372fd36c0c93"} Feb 26 09:27:14 crc kubenswrapper[4741]: I0226 09:27:14.251628 4741 generic.go:334] "Generic (PLEG): container finished" podID="799072e7-085a-4481-bacb-c0b4a32c39dc" containerID="2e6e04eab7481101242a9738b0ad5ed013d1334b3c0e37289b0f56088b8f7701" exitCode=0 Feb 26 09:27:14 crc kubenswrapper[4741]: I0226 09:27:14.251712 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wrfn" event={"ID":"799072e7-085a-4481-bacb-c0b4a32c39dc","Type":"ContainerDied","Data":"2e6e04eab7481101242a9738b0ad5ed013d1334b3c0e37289b0f56088b8f7701"} Feb 26 09:27:14 crc kubenswrapper[4741]: I0226 09:27:14.254785 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 09:27:16 crc kubenswrapper[4741]: I0226 09:27:16.278504 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wrfn" event={"ID":"799072e7-085a-4481-bacb-c0b4a32c39dc","Type":"ContainerStarted","Data":"a3ca6c3d64c31e87c257994b3020dbf40211bb2867d3f72161984c57233015df"} Feb 26 09:27:17 crc kubenswrapper[4741]: I0226 09:27:17.296000 4741 generic.go:334] "Generic (PLEG): container finished" podID="799072e7-085a-4481-bacb-c0b4a32c39dc" containerID="a3ca6c3d64c31e87c257994b3020dbf40211bb2867d3f72161984c57233015df" exitCode=0 Feb 26 09:27:17 crc kubenswrapper[4741]: I0226 09:27:17.296123 4741 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-5wrfn" event={"ID":"799072e7-085a-4481-bacb-c0b4a32c39dc","Type":"ContainerDied","Data":"a3ca6c3d64c31e87c257994b3020dbf40211bb2867d3f72161984c57233015df"} Feb 26 09:27:18 crc kubenswrapper[4741]: I0226 09:27:18.314190 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wrfn" event={"ID":"799072e7-085a-4481-bacb-c0b4a32c39dc","Type":"ContainerStarted","Data":"ba35ab268e45d69da56296ec3951f256c16b246d255882b117514ed3248eac60"} Feb 26 09:27:18 crc kubenswrapper[4741]: I0226 09:27:18.354425 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5wrfn" podStartSLOduration=3.9120122630000003 podStartE2EDuration="7.354398603s" podCreationTimestamp="2026-02-26 09:27:11 +0000 UTC" firstStartedPulling="2026-02-26 09:27:14.254484149 +0000 UTC m=+4469.250421536" lastFinishedPulling="2026-02-26 09:27:17.696870479 +0000 UTC m=+4472.692807876" observedRunningTime="2026-02-26 09:27:18.345369797 +0000 UTC m=+4473.341307184" watchObservedRunningTime="2026-02-26 09:27:18.354398603 +0000 UTC m=+4473.350335990" Feb 26 09:27:21 crc kubenswrapper[4741]: I0226 09:27:21.791639 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:27:21 crc kubenswrapper[4741]: E0226 09:27:21.792435 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:27:22 crc kubenswrapper[4741]: I0226 09:27:22.048488 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:22 crc kubenswrapper[4741]: I0226 09:27:22.049190 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:22 crc kubenswrapper[4741]: I0226 09:27:22.102892 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:22 crc kubenswrapper[4741]: I0226 09:27:22.411504 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:22 crc kubenswrapper[4741]: I0226 09:27:22.494795 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wrfn"] Feb 26 09:27:24 crc kubenswrapper[4741]: I0226 09:27:24.385714 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5wrfn" podUID="799072e7-085a-4481-bacb-c0b4a32c39dc" containerName="registry-server" containerID="cri-o://ba35ab268e45d69da56296ec3951f256c16b246d255882b117514ed3248eac60" gracePeriod=2 Feb 26 09:27:24 crc kubenswrapper[4741]: I0226 09:27:24.959276 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.055991 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wthp2\" (UniqueName: \"kubernetes.io/projected/799072e7-085a-4481-bacb-c0b4a32c39dc-kube-api-access-wthp2\") pod \"799072e7-085a-4481-bacb-c0b4a32c39dc\" (UID: \"799072e7-085a-4481-bacb-c0b4a32c39dc\") " Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.056579 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799072e7-085a-4481-bacb-c0b4a32c39dc-utilities\") pod \"799072e7-085a-4481-bacb-c0b4a32c39dc\" (UID: \"799072e7-085a-4481-bacb-c0b4a32c39dc\") " Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.056626 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799072e7-085a-4481-bacb-c0b4a32c39dc-catalog-content\") pod \"799072e7-085a-4481-bacb-c0b4a32c39dc\" (UID: \"799072e7-085a-4481-bacb-c0b4a32c39dc\") " Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.057392 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/799072e7-085a-4481-bacb-c0b4a32c39dc-utilities" (OuterVolumeSpecName: "utilities") pod "799072e7-085a-4481-bacb-c0b4a32c39dc" (UID: "799072e7-085a-4481-bacb-c0b4a32c39dc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.058759 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799072e7-085a-4481-bacb-c0b4a32c39dc-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.095888 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/799072e7-085a-4481-bacb-c0b4a32c39dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "799072e7-085a-4481-bacb-c0b4a32c39dc" (UID: "799072e7-085a-4481-bacb-c0b4a32c39dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.162420 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799072e7-085a-4481-bacb-c0b4a32c39dc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.402533 4741 generic.go:334] "Generic (PLEG): container finished" podID="799072e7-085a-4481-bacb-c0b4a32c39dc" containerID="ba35ab268e45d69da56296ec3951f256c16b246d255882b117514ed3248eac60" exitCode=0 Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.402591 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wrfn" event={"ID":"799072e7-085a-4481-bacb-c0b4a32c39dc","Type":"ContainerDied","Data":"ba35ab268e45d69da56296ec3951f256c16b246d255882b117514ed3248eac60"} Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.402629 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wrfn" event={"ID":"799072e7-085a-4481-bacb-c0b4a32c39dc","Type":"ContainerDied","Data":"a5908c073f70394c33604e004228fab16ff97823a527339d1307372fd36c0c93"} Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.402629 4741 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wrfn" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.402651 4741 scope.go:117] "RemoveContainer" containerID="ba35ab268e45d69da56296ec3951f256c16b246d255882b117514ed3248eac60" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.432498 4741 scope.go:117] "RemoveContainer" containerID="a3ca6c3d64c31e87c257994b3020dbf40211bb2867d3f72161984c57233015df" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.828431 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/799072e7-085a-4481-bacb-c0b4a32c39dc-kube-api-access-wthp2" (OuterVolumeSpecName: "kube-api-access-wthp2") pod "799072e7-085a-4481-bacb-c0b4a32c39dc" (UID: "799072e7-085a-4481-bacb-c0b4a32c39dc"). InnerVolumeSpecName "kube-api-access-wthp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.887664 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wthp2\" (UniqueName: \"kubernetes.io/projected/799072e7-085a-4481-bacb-c0b4a32c39dc-kube-api-access-wthp2\") on node \"crc\" DevicePath \"\"" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.890589 4741 scope.go:117] "RemoveContainer" containerID="2e6e04eab7481101242a9738b0ad5ed013d1334b3c0e37289b0f56088b8f7701" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.969828 4741 scope.go:117] "RemoveContainer" containerID="ba35ab268e45d69da56296ec3951f256c16b246d255882b117514ed3248eac60" Feb 26 09:27:25 crc kubenswrapper[4741]: E0226 09:27:25.970563 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba35ab268e45d69da56296ec3951f256c16b246d255882b117514ed3248eac60\": container with ID starting with ba35ab268e45d69da56296ec3951f256c16b246d255882b117514ed3248eac60 not found: ID does not exist" 
containerID="ba35ab268e45d69da56296ec3951f256c16b246d255882b117514ed3248eac60" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.970623 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba35ab268e45d69da56296ec3951f256c16b246d255882b117514ed3248eac60"} err="failed to get container status \"ba35ab268e45d69da56296ec3951f256c16b246d255882b117514ed3248eac60\": rpc error: code = NotFound desc = could not find container \"ba35ab268e45d69da56296ec3951f256c16b246d255882b117514ed3248eac60\": container with ID starting with ba35ab268e45d69da56296ec3951f256c16b246d255882b117514ed3248eac60 not found: ID does not exist" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.970661 4741 scope.go:117] "RemoveContainer" containerID="a3ca6c3d64c31e87c257994b3020dbf40211bb2867d3f72161984c57233015df" Feb 26 09:27:25 crc kubenswrapper[4741]: E0226 09:27:25.971412 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3ca6c3d64c31e87c257994b3020dbf40211bb2867d3f72161984c57233015df\": container with ID starting with a3ca6c3d64c31e87c257994b3020dbf40211bb2867d3f72161984c57233015df not found: ID does not exist" containerID="a3ca6c3d64c31e87c257994b3020dbf40211bb2867d3f72161984c57233015df" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.971551 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3ca6c3d64c31e87c257994b3020dbf40211bb2867d3f72161984c57233015df"} err="failed to get container status \"a3ca6c3d64c31e87c257994b3020dbf40211bb2867d3f72161984c57233015df\": rpc error: code = NotFound desc = could not find container \"a3ca6c3d64c31e87c257994b3020dbf40211bb2867d3f72161984c57233015df\": container with ID starting with a3ca6c3d64c31e87c257994b3020dbf40211bb2867d3f72161984c57233015df not found: ID does not exist" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.971690 4741 scope.go:117] 
"RemoveContainer" containerID="2e6e04eab7481101242a9738b0ad5ed013d1334b3c0e37289b0f56088b8f7701" Feb 26 09:27:25 crc kubenswrapper[4741]: E0226 09:27:25.972337 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e6e04eab7481101242a9738b0ad5ed013d1334b3c0e37289b0f56088b8f7701\": container with ID starting with 2e6e04eab7481101242a9738b0ad5ed013d1334b3c0e37289b0f56088b8f7701 not found: ID does not exist" containerID="2e6e04eab7481101242a9738b0ad5ed013d1334b3c0e37289b0f56088b8f7701" Feb 26 09:27:25 crc kubenswrapper[4741]: I0226 09:27:25.972382 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e6e04eab7481101242a9738b0ad5ed013d1334b3c0e37289b0f56088b8f7701"} err="failed to get container status \"2e6e04eab7481101242a9738b0ad5ed013d1334b3c0e37289b0f56088b8f7701\": rpc error: code = NotFound desc = could not find container \"2e6e04eab7481101242a9738b0ad5ed013d1334b3c0e37289b0f56088b8f7701\": container with ID starting with 2e6e04eab7481101242a9738b0ad5ed013d1334b3c0e37289b0f56088b8f7701 not found: ID does not exist" Feb 26 09:27:26 crc kubenswrapper[4741]: I0226 09:27:26.046289 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wrfn"] Feb 26 09:27:26 crc kubenswrapper[4741]: I0226 09:27:26.060996 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wrfn"] Feb 26 09:27:27 crc kubenswrapper[4741]: I0226 09:27:27.802351 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="799072e7-085a-4481-bacb-c0b4a32c39dc" path="/var/lib/kubelet/pods/799072e7-085a-4481-bacb-c0b4a32c39dc/volumes" Feb 26 09:27:32 crc kubenswrapper[4741]: I0226 09:27:32.788590 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:27:32 crc kubenswrapper[4741]: E0226 09:27:32.790607 4741 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:27:45 crc kubenswrapper[4741]: I0226 09:27:45.799332 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:27:45 crc kubenswrapper[4741]: E0226 09:27:45.800459 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:27:57 crc kubenswrapper[4741]: I0226 09:27:57.788339 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:27:57 crc kubenswrapper[4741]: E0226 09:27:57.789337 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.153322 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534968-dllb4"] Feb 26 09:28:00 crc kubenswrapper[4741]: E0226 
09:28:00.154616 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799072e7-085a-4481-bacb-c0b4a32c39dc" containerName="extract-utilities" Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.154637 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="799072e7-085a-4481-bacb-c0b4a32c39dc" containerName="extract-utilities" Feb 26 09:28:00 crc kubenswrapper[4741]: E0226 09:28:00.154663 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799072e7-085a-4481-bacb-c0b4a32c39dc" containerName="extract-content" Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.154670 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="799072e7-085a-4481-bacb-c0b4a32c39dc" containerName="extract-content" Feb 26 09:28:00 crc kubenswrapper[4741]: E0226 09:28:00.154689 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799072e7-085a-4481-bacb-c0b4a32c39dc" containerName="registry-server" Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.154696 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="799072e7-085a-4481-bacb-c0b4a32c39dc" containerName="registry-server" Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.154951 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="799072e7-085a-4481-bacb-c0b4a32c39dc" containerName="registry-server" Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.156178 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534968-dllb4" Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.165979 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.166483 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.167359 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.177458 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534968-dllb4"] Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.240189 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8vfk\" (UniqueName: \"kubernetes.io/projected/72e62028-917f-48e3-a0ad-07c33f0e67c0-kube-api-access-s8vfk\") pod \"auto-csr-approver-29534968-dllb4\" (UID: \"72e62028-917f-48e3-a0ad-07c33f0e67c0\") " pod="openshift-infra/auto-csr-approver-29534968-dllb4" Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.344236 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8vfk\" (UniqueName: \"kubernetes.io/projected/72e62028-917f-48e3-a0ad-07c33f0e67c0-kube-api-access-s8vfk\") pod \"auto-csr-approver-29534968-dllb4\" (UID: \"72e62028-917f-48e3-a0ad-07c33f0e67c0\") " pod="openshift-infra/auto-csr-approver-29534968-dllb4" Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.363403 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8vfk\" (UniqueName: \"kubernetes.io/projected/72e62028-917f-48e3-a0ad-07c33f0e67c0-kube-api-access-s8vfk\") pod \"auto-csr-approver-29534968-dllb4\" (UID: \"72e62028-917f-48e3-a0ad-07c33f0e67c0\") " 
pod="openshift-infra/auto-csr-approver-29534968-dllb4" Feb 26 09:28:00 crc kubenswrapper[4741]: I0226 09:28:00.479724 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534968-dllb4" Feb 26 09:28:01 crc kubenswrapper[4741]: I0226 09:28:01.019487 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534968-dllb4"] Feb 26 09:28:01 crc kubenswrapper[4741]: I0226 09:28:01.881073 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534968-dllb4" event={"ID":"72e62028-917f-48e3-a0ad-07c33f0e67c0","Type":"ContainerStarted","Data":"dfeb8de9690d718981bfffa48ec5ddfafe86f92f9e7a9c7c8eb4349cc81b472b"} Feb 26 09:28:02 crc kubenswrapper[4741]: I0226 09:28:02.899331 4741 generic.go:334] "Generic (PLEG): container finished" podID="72e62028-917f-48e3-a0ad-07c33f0e67c0" containerID="99cce85029b51712a8ab578ff9290821894a55aa11d0082e4e65b14e7afd335c" exitCode=0 Feb 26 09:28:02 crc kubenswrapper[4741]: I0226 09:28:02.899493 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534968-dllb4" event={"ID":"72e62028-917f-48e3-a0ad-07c33f0e67c0","Type":"ContainerDied","Data":"99cce85029b51712a8ab578ff9290821894a55aa11d0082e4e65b14e7afd335c"} Feb 26 09:28:04 crc kubenswrapper[4741]: I0226 09:28:04.341357 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534968-dllb4" Feb 26 09:28:04 crc kubenswrapper[4741]: I0226 09:28:04.376338 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8vfk\" (UniqueName: \"kubernetes.io/projected/72e62028-917f-48e3-a0ad-07c33f0e67c0-kube-api-access-s8vfk\") pod \"72e62028-917f-48e3-a0ad-07c33f0e67c0\" (UID: \"72e62028-917f-48e3-a0ad-07c33f0e67c0\") " Feb 26 09:28:04 crc kubenswrapper[4741]: I0226 09:28:04.382840 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e62028-917f-48e3-a0ad-07c33f0e67c0-kube-api-access-s8vfk" (OuterVolumeSpecName: "kube-api-access-s8vfk") pod "72e62028-917f-48e3-a0ad-07c33f0e67c0" (UID: "72e62028-917f-48e3-a0ad-07c33f0e67c0"). InnerVolumeSpecName "kube-api-access-s8vfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:28:04 crc kubenswrapper[4741]: I0226 09:28:04.479929 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8vfk\" (UniqueName: \"kubernetes.io/projected/72e62028-917f-48e3-a0ad-07c33f0e67c0-kube-api-access-s8vfk\") on node \"crc\" DevicePath \"\"" Feb 26 09:28:04 crc kubenswrapper[4741]: I0226 09:28:04.928986 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534968-dllb4" event={"ID":"72e62028-917f-48e3-a0ad-07c33f0e67c0","Type":"ContainerDied","Data":"dfeb8de9690d718981bfffa48ec5ddfafe86f92f9e7a9c7c8eb4349cc81b472b"} Feb 26 09:28:04 crc kubenswrapper[4741]: I0226 09:28:04.929048 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfeb8de9690d718981bfffa48ec5ddfafe86f92f9e7a9c7c8eb4349cc81b472b" Feb 26 09:28:04 crc kubenswrapper[4741]: I0226 09:28:04.929628 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534968-dllb4" Feb 26 09:28:05 crc kubenswrapper[4741]: I0226 09:28:05.497346 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534962-b5s4m"] Feb 26 09:28:05 crc kubenswrapper[4741]: I0226 09:28:05.510451 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534962-b5s4m"] Feb 26 09:28:05 crc kubenswrapper[4741]: I0226 09:28:05.806948 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3bade60-1e04-4198-bac4-f215547e197d" path="/var/lib/kubelet/pods/b3bade60-1e04-4198-bac4-f215547e197d/volumes" Feb 26 09:28:08 crc kubenswrapper[4741]: I0226 09:28:08.788087 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:28:08 crc kubenswrapper[4741]: E0226 09:28:08.788879 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:28:23 crc kubenswrapper[4741]: I0226 09:28:23.787609 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:28:23 crc kubenswrapper[4741]: E0226 09:28:23.789026 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" 
podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:28:36 crc kubenswrapper[4741]: I0226 09:28:36.788335 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:28:36 crc kubenswrapper[4741]: E0226 09:28:36.789605 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:28:49 crc kubenswrapper[4741]: I0226 09:28:49.788701 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:28:49 crc kubenswrapper[4741]: E0226 09:28:49.790098 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:29:04 crc kubenswrapper[4741]: I0226 09:29:04.497760 4741 scope.go:117] "RemoveContainer" containerID="5622d4bb63520ac537559b49863699638b0999d57b677482c39d84a880b095ae" Feb 26 09:29:04 crc kubenswrapper[4741]: I0226 09:29:04.787526 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:29:04 crc kubenswrapper[4741]: E0226 09:29:04.787837 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:29:16 crc kubenswrapper[4741]: I0226 09:29:16.788333 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:29:16 crc kubenswrapper[4741]: E0226 09:29:16.789188 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:29:29 crc kubenswrapper[4741]: I0226 09:29:29.788967 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:29:29 crc kubenswrapper[4741]: E0226 09:29:29.789931 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:29:43 crc kubenswrapper[4741]: I0226 09:29:43.800875 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:29:43 crc kubenswrapper[4741]: E0226 09:29:43.803963 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:29:57 crc kubenswrapper[4741]: I0226 09:29:57.787453 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:29:57 crc kubenswrapper[4741]: E0226 09:29:57.788365 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.162570 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534970-mjm9d"] Feb 26 09:30:00 crc kubenswrapper[4741]: E0226 09:30:00.163803 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e62028-917f-48e3-a0ad-07c33f0e67c0" containerName="oc" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.163825 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e62028-917f-48e3-a0ad-07c33f0e67c0" containerName="oc" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.164226 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e62028-917f-48e3-a0ad-07c33f0e67c0" containerName="oc" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.165700 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534970-mjm9d" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.169200 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.170293 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.170581 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.182535 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788"] Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.184782 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.189276 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.196324 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.198635 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534970-mjm9d"] Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.220167 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788"] Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.271508 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m4cv\" (UniqueName: 
\"kubernetes.io/projected/6c7959ec-608e-497d-86c2-13070fe0d48c-kube-api-access-6m4cv\") pod \"auto-csr-approver-29534970-mjm9d\" (UID: \"6c7959ec-608e-497d-86c2-13070fe0d48c\") " pod="openshift-infra/auto-csr-approver-29534970-mjm9d" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.375865 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0d6cc6f-00dd-4901-8601-731de54c0028-config-volume\") pod \"collect-profiles-29534970-9r788\" (UID: \"c0d6cc6f-00dd-4901-8601-731de54c0028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.376001 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbn6m\" (UniqueName: \"kubernetes.io/projected/c0d6cc6f-00dd-4901-8601-731de54c0028-kube-api-access-kbn6m\") pod \"collect-profiles-29534970-9r788\" (UID: \"c0d6cc6f-00dd-4901-8601-731de54c0028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.376431 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0d6cc6f-00dd-4901-8601-731de54c0028-secret-volume\") pod \"collect-profiles-29534970-9r788\" (UID: \"c0d6cc6f-00dd-4901-8601-731de54c0028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.377678 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m4cv\" (UniqueName: \"kubernetes.io/projected/6c7959ec-608e-497d-86c2-13070fe0d48c-kube-api-access-6m4cv\") pod \"auto-csr-approver-29534970-mjm9d\" (UID: \"6c7959ec-608e-497d-86c2-13070fe0d48c\") " pod="openshift-infra/auto-csr-approver-29534970-mjm9d" Feb 26 09:30:00 
crc kubenswrapper[4741]: I0226 09:30:00.402818 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m4cv\" (UniqueName: \"kubernetes.io/projected/6c7959ec-608e-497d-86c2-13070fe0d48c-kube-api-access-6m4cv\") pod \"auto-csr-approver-29534970-mjm9d\" (UID: \"6c7959ec-608e-497d-86c2-13070fe0d48c\") " pod="openshift-infra/auto-csr-approver-29534970-mjm9d" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.480976 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0d6cc6f-00dd-4901-8601-731de54c0028-config-volume\") pod \"collect-profiles-29534970-9r788\" (UID: \"c0d6cc6f-00dd-4901-8601-731de54c0028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.481437 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbn6m\" (UniqueName: \"kubernetes.io/projected/c0d6cc6f-00dd-4901-8601-731de54c0028-kube-api-access-kbn6m\") pod \"collect-profiles-29534970-9r788\" (UID: \"c0d6cc6f-00dd-4901-8601-731de54c0028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.481559 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0d6cc6f-00dd-4901-8601-731de54c0028-secret-volume\") pod \"collect-profiles-29534970-9r788\" (UID: \"c0d6cc6f-00dd-4901-8601-731de54c0028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.482102 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0d6cc6f-00dd-4901-8601-731de54c0028-config-volume\") pod \"collect-profiles-29534970-9r788\" (UID: 
\"c0d6cc6f-00dd-4901-8601-731de54c0028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.488280 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534970-mjm9d" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.493300 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0d6cc6f-00dd-4901-8601-731de54c0028-secret-volume\") pod \"collect-profiles-29534970-9r788\" (UID: \"c0d6cc6f-00dd-4901-8601-731de54c0028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.501402 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbn6m\" (UniqueName: \"kubernetes.io/projected/c0d6cc6f-00dd-4901-8601-731de54c0028-kube-api-access-kbn6m\") pod \"collect-profiles-29534970-9r788\" (UID: \"c0d6cc6f-00dd-4901-8601-731de54c0028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" Feb 26 09:30:00 crc kubenswrapper[4741]: I0226 09:30:00.505776 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" Feb 26 09:30:01 crc kubenswrapper[4741]: I0226 09:30:01.747686 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534970-mjm9d"] Feb 26 09:30:01 crc kubenswrapper[4741]: I0226 09:30:01.939018 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788"] Feb 26 09:30:02 crc kubenswrapper[4741]: I0226 09:30:02.439204 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534970-mjm9d" event={"ID":"6c7959ec-608e-497d-86c2-13070fe0d48c","Type":"ContainerStarted","Data":"107a9b980e503debd5e3a1ce1759eccc4027da0d9853f8d54914806f925a52fc"} Feb 26 09:30:02 crc kubenswrapper[4741]: I0226 09:30:02.441470 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" event={"ID":"c0d6cc6f-00dd-4901-8601-731de54c0028","Type":"ContainerStarted","Data":"cf2bf0e2fb6f0e5f44502da99cf160fcc9788c81fa997091ecb159e47db8059b"} Feb 26 09:30:02 crc kubenswrapper[4741]: I0226 09:30:02.441502 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" event={"ID":"c0d6cc6f-00dd-4901-8601-731de54c0028","Type":"ContainerStarted","Data":"596ab3cb58226ffe8c45c0d969548deab879366c26c826cc21ed54b8b3fb11b2"} Feb 26 09:30:02 crc kubenswrapper[4741]: I0226 09:30:02.472716 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" podStartSLOduration=2.472688745 podStartE2EDuration="2.472688745s" podCreationTimestamp="2026-02-26 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:30:02.458277546 +0000 UTC m=+4637.454214953" 
watchObservedRunningTime="2026-02-26 09:30:02.472688745 +0000 UTC m=+4637.468626132" Feb 26 09:30:03 crc kubenswrapper[4741]: I0226 09:30:03.465635 4741 generic.go:334] "Generic (PLEG): container finished" podID="c0d6cc6f-00dd-4901-8601-731de54c0028" containerID="cf2bf0e2fb6f0e5f44502da99cf160fcc9788c81fa997091ecb159e47db8059b" exitCode=0 Feb 26 09:30:03 crc kubenswrapper[4741]: I0226 09:30:03.465705 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" event={"ID":"c0d6cc6f-00dd-4901-8601-731de54c0028","Type":"ContainerDied","Data":"cf2bf0e2fb6f0e5f44502da99cf160fcc9788c81fa997091ecb159e47db8059b"} Feb 26 09:30:04 crc kubenswrapper[4741]: I0226 09:30:04.481426 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534970-mjm9d" event={"ID":"6c7959ec-608e-497d-86c2-13070fe0d48c","Type":"ContainerStarted","Data":"f2f8904bca77b71ad7ceb3e0acfbf77ac936d7d5d8ae19dfaba441fe1d1aab19"} Feb 26 09:30:04 crc kubenswrapper[4741]: I0226 09:30:04.506585 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534970-mjm9d" podStartSLOduration=2.20356052 podStartE2EDuration="4.506525805s" podCreationTimestamp="2026-02-26 09:30:00 +0000 UTC" firstStartedPulling="2026-02-26 09:30:01.741690687 +0000 UTC m=+4636.737628074" lastFinishedPulling="2026-02-26 09:30:04.044655972 +0000 UTC m=+4639.040593359" observedRunningTime="2026-02-26 09:30:04.495613275 +0000 UTC m=+4639.491550662" watchObservedRunningTime="2026-02-26 09:30:04.506525805 +0000 UTC m=+4639.502463192" Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.055404 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.240795 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0d6cc6f-00dd-4901-8601-731de54c0028-secret-volume\") pod \"c0d6cc6f-00dd-4901-8601-731de54c0028\" (UID: \"c0d6cc6f-00dd-4901-8601-731de54c0028\") " Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.240859 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0d6cc6f-00dd-4901-8601-731de54c0028-config-volume\") pod \"c0d6cc6f-00dd-4901-8601-731de54c0028\" (UID: \"c0d6cc6f-00dd-4901-8601-731de54c0028\") " Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.241353 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbn6m\" (UniqueName: \"kubernetes.io/projected/c0d6cc6f-00dd-4901-8601-731de54c0028-kube-api-access-kbn6m\") pod \"c0d6cc6f-00dd-4901-8601-731de54c0028\" (UID: \"c0d6cc6f-00dd-4901-8601-731de54c0028\") " Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.241635 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d6cc6f-00dd-4901-8601-731de54c0028-config-volume" (OuterVolumeSpecName: "config-volume") pod "c0d6cc6f-00dd-4901-8601-731de54c0028" (UID: "c0d6cc6f-00dd-4901-8601-731de54c0028"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.243233 4741 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0d6cc6f-00dd-4901-8601-731de54c0028-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.247135 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d6cc6f-00dd-4901-8601-731de54c0028-kube-api-access-kbn6m" (OuterVolumeSpecName: "kube-api-access-kbn6m") pod "c0d6cc6f-00dd-4901-8601-731de54c0028" (UID: "c0d6cc6f-00dd-4901-8601-731de54c0028"). InnerVolumeSpecName "kube-api-access-kbn6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.249068 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d6cc6f-00dd-4901-8601-731de54c0028-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c0d6cc6f-00dd-4901-8601-731de54c0028" (UID: "c0d6cc6f-00dd-4901-8601-731de54c0028"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.346373 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbn6m\" (UniqueName: \"kubernetes.io/projected/c0d6cc6f-00dd-4901-8601-731de54c0028-kube-api-access-kbn6m\") on node \"crc\" DevicePath \"\"" Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.346422 4741 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0d6cc6f-00dd-4901-8601-731de54c0028-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.496503 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" event={"ID":"c0d6cc6f-00dd-4901-8601-731de54c0028","Type":"ContainerDied","Data":"596ab3cb58226ffe8c45c0d969548deab879366c26c826cc21ed54b8b3fb11b2"} Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.496555 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="596ab3cb58226ffe8c45c0d969548deab879366c26c826cc21ed54b8b3fb11b2" Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.496631 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-9r788" Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.505310 4741 generic.go:334] "Generic (PLEG): container finished" podID="6c7959ec-608e-497d-86c2-13070fe0d48c" containerID="f2f8904bca77b71ad7ceb3e0acfbf77ac936d7d5d8ae19dfaba441fe1d1aab19" exitCode=0 Feb 26 09:30:05 crc kubenswrapper[4741]: I0226 09:30:05.505359 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534970-mjm9d" event={"ID":"6c7959ec-608e-497d-86c2-13070fe0d48c","Type":"ContainerDied","Data":"f2f8904bca77b71ad7ceb3e0acfbf77ac936d7d5d8ae19dfaba441fe1d1aab19"} Feb 26 09:30:06 crc kubenswrapper[4741]: I0226 09:30:06.162594 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp"] Feb 26 09:30:06 crc kubenswrapper[4741]: I0226 09:30:06.175041 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534925-5wwfp"] Feb 26 09:30:07 crc kubenswrapper[4741]: I0226 09:30:07.027728 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534970-mjm9d" Feb 26 09:30:07 crc kubenswrapper[4741]: I0226 09:30:07.205048 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m4cv\" (UniqueName: \"kubernetes.io/projected/6c7959ec-608e-497d-86c2-13070fe0d48c-kube-api-access-6m4cv\") pod \"6c7959ec-608e-497d-86c2-13070fe0d48c\" (UID: \"6c7959ec-608e-497d-86c2-13070fe0d48c\") " Feb 26 09:30:07 crc kubenswrapper[4741]: I0226 09:30:07.213745 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7959ec-608e-497d-86c2-13070fe0d48c-kube-api-access-6m4cv" (OuterVolumeSpecName: "kube-api-access-6m4cv") pod "6c7959ec-608e-497d-86c2-13070fe0d48c" (UID: "6c7959ec-608e-497d-86c2-13070fe0d48c"). 
InnerVolumeSpecName "kube-api-access-6m4cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:30:07 crc kubenswrapper[4741]: I0226 09:30:07.309633 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m4cv\" (UniqueName: \"kubernetes.io/projected/6c7959ec-608e-497d-86c2-13070fe0d48c-kube-api-access-6m4cv\") on node \"crc\" DevicePath \"\"" Feb 26 09:30:07 crc kubenswrapper[4741]: I0226 09:30:07.540185 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534970-mjm9d" event={"ID":"6c7959ec-608e-497d-86c2-13070fe0d48c","Type":"ContainerDied","Data":"107a9b980e503debd5e3a1ce1759eccc4027da0d9853f8d54914806f925a52fc"} Feb 26 09:30:07 crc kubenswrapper[4741]: I0226 09:30:07.540237 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="107a9b980e503debd5e3a1ce1759eccc4027da0d9853f8d54914806f925a52fc" Feb 26 09:30:07 crc kubenswrapper[4741]: I0226 09:30:07.540284 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534970-mjm9d" Feb 26 09:30:07 crc kubenswrapper[4741]: I0226 09:30:07.596005 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534964-8snt8"] Feb 26 09:30:07 crc kubenswrapper[4741]: I0226 09:30:07.618726 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534964-8snt8"] Feb 26 09:30:07 crc kubenswrapper[4741]: I0226 09:30:07.806979 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f06202-eab6-4057-a11f-1e003e1f60bc" path="/var/lib/kubelet/pods/36f06202-eab6-4057-a11f-1e003e1f60bc/volumes" Feb 26 09:30:07 crc kubenswrapper[4741]: I0226 09:30:07.810298 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e66e0b27-0f2d-4c98-97c6-383c5e15e23f" path="/var/lib/kubelet/pods/e66e0b27-0f2d-4c98-97c6-383c5e15e23f/volumes" Feb 26 09:30:12 crc kubenswrapper[4741]: I0226 09:30:12.786879 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:30:12 crc kubenswrapper[4741]: E0226 09:30:12.789365 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:30:25 crc kubenswrapper[4741]: I0226 09:30:25.801153 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:30:25 crc kubenswrapper[4741]: E0226 09:30:25.803193 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:30:38 crc kubenswrapper[4741]: I0226 09:30:38.788350 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:30:38 crc kubenswrapper[4741]: E0226 09:30:38.789451 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:30:53 crc kubenswrapper[4741]: I0226 09:30:53.788980 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:30:53 crc kubenswrapper[4741]: E0226 09:30:53.789884 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:31:04 crc kubenswrapper[4741]: I0226 09:31:04.648727 4741 scope.go:117] "RemoveContainer" containerID="c4aed9d81bba654b861b4082f35d098fca9d4e0506889bd476eea050316c3fce" Feb 26 09:31:04 crc kubenswrapper[4741]: I0226 09:31:04.686425 4741 scope.go:117] "RemoveContainer" containerID="8468538425f0b57c050043cfe60c73ce9447f1906a1c4aebef6f91ca4ece7964" Feb 26 09:31:05 crc 
kubenswrapper[4741]: I0226 09:31:05.796346 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:31:05 crc kubenswrapper[4741]: E0226 09:31:05.797167 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.253239 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jxm2z"] Feb 26 09:31:17 crc kubenswrapper[4741]: E0226 09:31:17.254408 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7959ec-608e-497d-86c2-13070fe0d48c" containerName="oc" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.254430 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7959ec-608e-497d-86c2-13070fe0d48c" containerName="oc" Feb 26 09:31:17 crc kubenswrapper[4741]: E0226 09:31:17.254503 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d6cc6f-00dd-4901-8601-731de54c0028" containerName="collect-profiles" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.254513 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d6cc6f-00dd-4901-8601-731de54c0028" containerName="collect-profiles" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.254776 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7959ec-608e-497d-86c2-13070fe0d48c" containerName="oc" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.254822 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d6cc6f-00dd-4901-8601-731de54c0028" containerName="collect-profiles" Feb 26 
09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.257302 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.274275 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxm2z"] Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.327204 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a67746-9c73-483b-876c-041bca1045c9-utilities\") pod \"community-operators-jxm2z\" (UID: \"56a67746-9c73-483b-876c-041bca1045c9\") " pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.327251 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99l57\" (UniqueName: \"kubernetes.io/projected/56a67746-9c73-483b-876c-041bca1045c9-kube-api-access-99l57\") pod \"community-operators-jxm2z\" (UID: \"56a67746-9c73-483b-876c-041bca1045c9\") " pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.327510 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a67746-9c73-483b-876c-041bca1045c9-catalog-content\") pod \"community-operators-jxm2z\" (UID: \"56a67746-9c73-483b-876c-041bca1045c9\") " pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.429572 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a67746-9c73-483b-876c-041bca1045c9-catalog-content\") pod \"community-operators-jxm2z\" (UID: \"56a67746-9c73-483b-876c-041bca1045c9\") " 
pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.429748 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a67746-9c73-483b-876c-041bca1045c9-utilities\") pod \"community-operators-jxm2z\" (UID: \"56a67746-9c73-483b-876c-041bca1045c9\") " pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.429781 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99l57\" (UniqueName: \"kubernetes.io/projected/56a67746-9c73-483b-876c-041bca1045c9-kube-api-access-99l57\") pod \"community-operators-jxm2z\" (UID: \"56a67746-9c73-483b-876c-041bca1045c9\") " pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.430266 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a67746-9c73-483b-876c-041bca1045c9-catalog-content\") pod \"community-operators-jxm2z\" (UID: \"56a67746-9c73-483b-876c-041bca1045c9\") " pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.430448 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a67746-9c73-483b-876c-041bca1045c9-utilities\") pod \"community-operators-jxm2z\" (UID: \"56a67746-9c73-483b-876c-041bca1045c9\") " pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.455319 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99l57\" (UniqueName: \"kubernetes.io/projected/56a67746-9c73-483b-876c-041bca1045c9-kube-api-access-99l57\") pod \"community-operators-jxm2z\" (UID: \"56a67746-9c73-483b-876c-041bca1045c9\") " 
pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:17 crc kubenswrapper[4741]: I0226 09:31:17.581446 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:18 crc kubenswrapper[4741]: I0226 09:31:18.257010 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxm2z"] Feb 26 09:31:18 crc kubenswrapper[4741]: I0226 09:31:18.492784 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxm2z" event={"ID":"56a67746-9c73-483b-876c-041bca1045c9","Type":"ContainerStarted","Data":"97fc7152526452a93d39762dc641ea6a9bc653ce8ec254bea1d55e4ded134825"} Feb 26 09:31:19 crc kubenswrapper[4741]: I0226 09:31:19.509825 4741 generic.go:334] "Generic (PLEG): container finished" podID="56a67746-9c73-483b-876c-041bca1045c9" containerID="c8a32f215e96fc47532649f60dfe7d37aaa3098d6088eb0c3dc956fd20e43bd5" exitCode=0 Feb 26 09:31:19 crc kubenswrapper[4741]: I0226 09:31:19.509925 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxm2z" event={"ID":"56a67746-9c73-483b-876c-041bca1045c9","Type":"ContainerDied","Data":"c8a32f215e96fc47532649f60dfe7d37aaa3098d6088eb0c3dc956fd20e43bd5"} Feb 26 09:31:20 crc kubenswrapper[4741]: I0226 09:31:20.533307 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxm2z" event={"ID":"56a67746-9c73-483b-876c-041bca1045c9","Type":"ContainerStarted","Data":"885409b4b36981e43519f4a38959fd7b0cc4e14ee94674cbbbeaacde25e72114"} Feb 26 09:31:20 crc kubenswrapper[4741]: I0226 09:31:20.788088 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:31:20 crc kubenswrapper[4741]: E0226 09:31:20.788427 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:31:22 crc kubenswrapper[4741]: I0226 09:31:22.563235 4741 generic.go:334] "Generic (PLEG): container finished" podID="56a67746-9c73-483b-876c-041bca1045c9" containerID="885409b4b36981e43519f4a38959fd7b0cc4e14ee94674cbbbeaacde25e72114" exitCode=0 Feb 26 09:31:22 crc kubenswrapper[4741]: I0226 09:31:22.563807 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxm2z" event={"ID":"56a67746-9c73-483b-876c-041bca1045c9","Type":"ContainerDied","Data":"885409b4b36981e43519f4a38959fd7b0cc4e14ee94674cbbbeaacde25e72114"} Feb 26 09:31:23 crc kubenswrapper[4741]: I0226 09:31:23.581939 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxm2z" event={"ID":"56a67746-9c73-483b-876c-041bca1045c9","Type":"ContainerStarted","Data":"84fb1a694a437f5d52ad157bcea37963049507f1c16a6a87e6e55626d0e0f766"} Feb 26 09:31:23 crc kubenswrapper[4741]: I0226 09:31:23.605454 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jxm2z" podStartSLOduration=3.154672064 podStartE2EDuration="6.605430721s" podCreationTimestamp="2026-02-26 09:31:17 +0000 UTC" firstStartedPulling="2026-02-26 09:31:19.512506741 +0000 UTC m=+4714.508444128" lastFinishedPulling="2026-02-26 09:31:22.963265398 +0000 UTC m=+4717.959202785" observedRunningTime="2026-02-26 09:31:23.601099388 +0000 UTC m=+4718.597036775" watchObservedRunningTime="2026-02-26 09:31:23.605430721 +0000 UTC m=+4718.601368108" Feb 26 09:31:27 crc kubenswrapper[4741]: I0226 09:31:27.583148 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:27 crc kubenswrapper[4741]: I0226 09:31:27.585096 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:28 crc kubenswrapper[4741]: I0226 09:31:28.655687 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jxm2z" podUID="56a67746-9c73-483b-876c-041bca1045c9" containerName="registry-server" probeResult="failure" output=< Feb 26 09:31:28 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:31:28 crc kubenswrapper[4741]: > Feb 26 09:31:31 crc kubenswrapper[4741]: I0226 09:31:31.787883 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:31:31 crc kubenswrapper[4741]: E0226 09:31:31.788649 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:31:37 crc kubenswrapper[4741]: I0226 09:31:37.636768 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:37 crc kubenswrapper[4741]: I0226 09:31:37.704486 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.160229 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxm2z"] Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.161081 4741 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jxm2z" podUID="56a67746-9c73-483b-876c-041bca1045c9" containerName="registry-server" containerID="cri-o://84fb1a694a437f5d52ad157bcea37963049507f1c16a6a87e6e55626d0e0f766" gracePeriod=2 Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.761952 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.854602 4741 generic.go:334] "Generic (PLEG): container finished" podID="56a67746-9c73-483b-876c-041bca1045c9" containerID="84fb1a694a437f5d52ad157bcea37963049507f1c16a6a87e6e55626d0e0f766" exitCode=0 Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.854658 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxm2z" event={"ID":"56a67746-9c73-483b-876c-041bca1045c9","Type":"ContainerDied","Data":"84fb1a694a437f5d52ad157bcea37963049507f1c16a6a87e6e55626d0e0f766"} Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.855344 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxm2z" event={"ID":"56a67746-9c73-483b-876c-041bca1045c9","Type":"ContainerDied","Data":"97fc7152526452a93d39762dc641ea6a9bc653ce8ec254bea1d55e4ded134825"} Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.855377 4741 scope.go:117] "RemoveContainer" containerID="84fb1a694a437f5d52ad157bcea37963049507f1c16a6a87e6e55626d0e0f766" Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.854775 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxm2z" Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.887587 4741 scope.go:117] "RemoveContainer" containerID="885409b4b36981e43519f4a38959fd7b0cc4e14ee94674cbbbeaacde25e72114" Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.914334 4741 scope.go:117] "RemoveContainer" containerID="c8a32f215e96fc47532649f60dfe7d37aaa3098d6088eb0c3dc956fd20e43bd5" Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.963597 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a67746-9c73-483b-876c-041bca1045c9-utilities\") pod \"56a67746-9c73-483b-876c-041bca1045c9\" (UID: \"56a67746-9c73-483b-876c-041bca1045c9\") " Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.964005 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a67746-9c73-483b-876c-041bca1045c9-catalog-content\") pod \"56a67746-9c73-483b-876c-041bca1045c9\" (UID: \"56a67746-9c73-483b-876c-041bca1045c9\") " Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.964090 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99l57\" (UniqueName: \"kubernetes.io/projected/56a67746-9c73-483b-876c-041bca1045c9-kube-api-access-99l57\") pod \"56a67746-9c73-483b-876c-041bca1045c9\" (UID: \"56a67746-9c73-483b-876c-041bca1045c9\") " Feb 26 09:31:41 crc kubenswrapper[4741]: I0226 09:31:41.967748 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56a67746-9c73-483b-876c-041bca1045c9-utilities" (OuterVolumeSpecName: "utilities") pod "56a67746-9c73-483b-876c-041bca1045c9" (UID: "56a67746-9c73-483b-876c-041bca1045c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:31:42 crc kubenswrapper[4741]: I0226 09:31:42.018795 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56a67746-9c73-483b-876c-041bca1045c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56a67746-9c73-483b-876c-041bca1045c9" (UID: "56a67746-9c73-483b-876c-041bca1045c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:31:42 crc kubenswrapper[4741]: I0226 09:31:42.076216 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a67746-9c73-483b-876c-041bca1045c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:31:42 crc kubenswrapper[4741]: I0226 09:31:42.076293 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a67746-9c73-483b-876c-041bca1045c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:31:42 crc kubenswrapper[4741]: I0226 09:31:42.604327 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56a67746-9c73-483b-876c-041bca1045c9-kube-api-access-99l57" (OuterVolumeSpecName: "kube-api-access-99l57") pod "56a67746-9c73-483b-876c-041bca1045c9" (UID: "56a67746-9c73-483b-876c-041bca1045c9"). InnerVolumeSpecName "kube-api-access-99l57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:31:42 crc kubenswrapper[4741]: I0226 09:31:42.691606 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99l57\" (UniqueName: \"kubernetes.io/projected/56a67746-9c73-483b-876c-041bca1045c9-kube-api-access-99l57\") on node \"crc\" DevicePath \"\"" Feb 26 09:31:42 crc kubenswrapper[4741]: I0226 09:31:42.729096 4741 scope.go:117] "RemoveContainer" containerID="84fb1a694a437f5d52ad157bcea37963049507f1c16a6a87e6e55626d0e0f766" Feb 26 09:31:42 crc kubenswrapper[4741]: E0226 09:31:42.731572 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84fb1a694a437f5d52ad157bcea37963049507f1c16a6a87e6e55626d0e0f766\": container with ID starting with 84fb1a694a437f5d52ad157bcea37963049507f1c16a6a87e6e55626d0e0f766 not found: ID does not exist" containerID="84fb1a694a437f5d52ad157bcea37963049507f1c16a6a87e6e55626d0e0f766" Feb 26 09:31:42 crc kubenswrapper[4741]: I0226 09:31:42.731651 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84fb1a694a437f5d52ad157bcea37963049507f1c16a6a87e6e55626d0e0f766"} err="failed to get container status \"84fb1a694a437f5d52ad157bcea37963049507f1c16a6a87e6e55626d0e0f766\": rpc error: code = NotFound desc = could not find container \"84fb1a694a437f5d52ad157bcea37963049507f1c16a6a87e6e55626d0e0f766\": container with ID starting with 84fb1a694a437f5d52ad157bcea37963049507f1c16a6a87e6e55626d0e0f766 not found: ID does not exist" Feb 26 09:31:42 crc kubenswrapper[4741]: I0226 09:31:42.731701 4741 scope.go:117] "RemoveContainer" containerID="885409b4b36981e43519f4a38959fd7b0cc4e14ee94674cbbbeaacde25e72114" Feb 26 09:31:42 crc kubenswrapper[4741]: E0226 09:31:42.734163 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"885409b4b36981e43519f4a38959fd7b0cc4e14ee94674cbbbeaacde25e72114\": container with ID starting with 885409b4b36981e43519f4a38959fd7b0cc4e14ee94674cbbbeaacde25e72114 not found: ID does not exist" containerID="885409b4b36981e43519f4a38959fd7b0cc4e14ee94674cbbbeaacde25e72114" Feb 26 09:31:42 crc kubenswrapper[4741]: I0226 09:31:42.734293 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"885409b4b36981e43519f4a38959fd7b0cc4e14ee94674cbbbeaacde25e72114"} err="failed to get container status \"885409b4b36981e43519f4a38959fd7b0cc4e14ee94674cbbbeaacde25e72114\": rpc error: code = NotFound desc = could not find container \"885409b4b36981e43519f4a38959fd7b0cc4e14ee94674cbbbeaacde25e72114\": container with ID starting with 885409b4b36981e43519f4a38959fd7b0cc4e14ee94674cbbbeaacde25e72114 not found: ID does not exist" Feb 26 09:31:42 crc kubenswrapper[4741]: I0226 09:31:42.734321 4741 scope.go:117] "RemoveContainer" containerID="c8a32f215e96fc47532649f60dfe7d37aaa3098d6088eb0c3dc956fd20e43bd5" Feb 26 09:31:42 crc kubenswrapper[4741]: E0226 09:31:42.736362 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8a32f215e96fc47532649f60dfe7d37aaa3098d6088eb0c3dc956fd20e43bd5\": container with ID starting with c8a32f215e96fc47532649f60dfe7d37aaa3098d6088eb0c3dc956fd20e43bd5 not found: ID does not exist" containerID="c8a32f215e96fc47532649f60dfe7d37aaa3098d6088eb0c3dc956fd20e43bd5" Feb 26 09:31:42 crc kubenswrapper[4741]: I0226 09:31:42.736405 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a32f215e96fc47532649f60dfe7d37aaa3098d6088eb0c3dc956fd20e43bd5"} err="failed to get container status \"c8a32f215e96fc47532649f60dfe7d37aaa3098d6088eb0c3dc956fd20e43bd5\": rpc error: code = NotFound desc = could not find container \"c8a32f215e96fc47532649f60dfe7d37aaa3098d6088eb0c3dc956fd20e43bd5\": container with ID 
starting with c8a32f215e96fc47532649f60dfe7d37aaa3098d6088eb0c3dc956fd20e43bd5 not found: ID does not exist" Feb 26 09:31:42 crc kubenswrapper[4741]: I0226 09:31:42.813881 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxm2z"] Feb 26 09:31:42 crc kubenswrapper[4741]: I0226 09:31:42.830030 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jxm2z"] Feb 26 09:31:43 crc kubenswrapper[4741]: I0226 09:31:43.812083 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56a67746-9c73-483b-876c-041bca1045c9" path="/var/lib/kubelet/pods/56a67746-9c73-483b-876c-041bca1045c9/volumes" Feb 26 09:31:45 crc kubenswrapper[4741]: I0226 09:31:45.799218 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:31:45 crc kubenswrapper[4741]: E0226 09:31:45.800854 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.158698 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534972-cvmcx"] Feb 26 09:32:00 crc kubenswrapper[4741]: E0226 09:32:00.159984 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a67746-9c73-483b-876c-041bca1045c9" containerName="extract-content" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.160003 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a67746-9c73-483b-876c-041bca1045c9" containerName="extract-content" Feb 26 09:32:00 crc kubenswrapper[4741]: E0226 
09:32:00.160032 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a67746-9c73-483b-876c-041bca1045c9" containerName="registry-server" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.160038 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a67746-9c73-483b-876c-041bca1045c9" containerName="registry-server" Feb 26 09:32:00 crc kubenswrapper[4741]: E0226 09:32:00.160065 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a67746-9c73-483b-876c-041bca1045c9" containerName="extract-utilities" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.160071 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a67746-9c73-483b-876c-041bca1045c9" containerName="extract-utilities" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.160342 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="56a67746-9c73-483b-876c-041bca1045c9" containerName="registry-server" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.161413 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534972-cvmcx" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.163777 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.164584 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.164640 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.175150 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534972-cvmcx"] Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.327220 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m9nn\" (UniqueName: \"kubernetes.io/projected/65114173-847d-4dae-ace2-e4129e1c9000-kube-api-access-8m9nn\") pod \"auto-csr-approver-29534972-cvmcx\" (UID: \"65114173-847d-4dae-ace2-e4129e1c9000\") " pod="openshift-infra/auto-csr-approver-29534972-cvmcx" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.429809 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m9nn\" (UniqueName: \"kubernetes.io/projected/65114173-847d-4dae-ace2-e4129e1c9000-kube-api-access-8m9nn\") pod \"auto-csr-approver-29534972-cvmcx\" (UID: \"65114173-847d-4dae-ace2-e4129e1c9000\") " pod="openshift-infra/auto-csr-approver-29534972-cvmcx" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.451274 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m9nn\" (UniqueName: \"kubernetes.io/projected/65114173-847d-4dae-ace2-e4129e1c9000-kube-api-access-8m9nn\") pod \"auto-csr-approver-29534972-cvmcx\" (UID: \"65114173-847d-4dae-ace2-e4129e1c9000\") " 
pod="openshift-infra/auto-csr-approver-29534972-cvmcx" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.482301 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534972-cvmcx" Feb 26 09:32:00 crc kubenswrapper[4741]: I0226 09:32:00.787885 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:32:01 crc kubenswrapper[4741]: W0226 09:32:01.034829 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65114173_847d_4dae_ace2_e4129e1c9000.slice/crio-3b42fe4d2e0faec0db2b858918465a2633229b265033b051958e4ccd463f9c14 WatchSource:0}: Error finding container 3b42fe4d2e0faec0db2b858918465a2633229b265033b051958e4ccd463f9c14: Status 404 returned error can't find the container with id 3b42fe4d2e0faec0db2b858918465a2633229b265033b051958e4ccd463f9c14 Feb 26 09:32:01 crc kubenswrapper[4741]: I0226 09:32:01.035919 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534972-cvmcx"] Feb 26 09:32:01 crc kubenswrapper[4741]: I0226 09:32:01.109882 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"64c069b554b565268c509c561ac65dd91439859fc90f66377d7b7cbce0b3b518"} Feb 26 09:32:01 crc kubenswrapper[4741]: I0226 09:32:01.111645 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534972-cvmcx" event={"ID":"65114173-847d-4dae-ace2-e4129e1c9000","Type":"ContainerStarted","Data":"3b42fe4d2e0faec0db2b858918465a2633229b265033b051958e4ccd463f9c14"} Feb 26 09:32:03 crc kubenswrapper[4741]: I0226 09:32:03.141789 4741 generic.go:334] "Generic (PLEG): container finished" podID="65114173-847d-4dae-ace2-e4129e1c9000" 
containerID="89699ba4ff35ac9a669ca4fa5a22bd72412ee1f47018ff53d5f63149fc6091e3" exitCode=0 Feb 26 09:32:03 crc kubenswrapper[4741]: I0226 09:32:03.141896 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534972-cvmcx" event={"ID":"65114173-847d-4dae-ace2-e4129e1c9000","Type":"ContainerDied","Data":"89699ba4ff35ac9a669ca4fa5a22bd72412ee1f47018ff53d5f63149fc6091e3"} Feb 26 09:32:04 crc kubenswrapper[4741]: I0226 09:32:04.571621 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534972-cvmcx" Feb 26 09:32:04 crc kubenswrapper[4741]: I0226 09:32:04.732585 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m9nn\" (UniqueName: \"kubernetes.io/projected/65114173-847d-4dae-ace2-e4129e1c9000-kube-api-access-8m9nn\") pod \"65114173-847d-4dae-ace2-e4129e1c9000\" (UID: \"65114173-847d-4dae-ace2-e4129e1c9000\") " Feb 26 09:32:04 crc kubenswrapper[4741]: I0226 09:32:04.739468 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65114173-847d-4dae-ace2-e4129e1c9000-kube-api-access-8m9nn" (OuterVolumeSpecName: "kube-api-access-8m9nn") pod "65114173-847d-4dae-ace2-e4129e1c9000" (UID: "65114173-847d-4dae-ace2-e4129e1c9000"). InnerVolumeSpecName "kube-api-access-8m9nn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:32:04 crc kubenswrapper[4741]: I0226 09:32:04.837746 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m9nn\" (UniqueName: \"kubernetes.io/projected/65114173-847d-4dae-ace2-e4129e1c9000-kube-api-access-8m9nn\") on node \"crc\" DevicePath \"\"" Feb 26 09:32:05 crc kubenswrapper[4741]: I0226 09:32:05.176356 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534972-cvmcx" event={"ID":"65114173-847d-4dae-ace2-e4129e1c9000","Type":"ContainerDied","Data":"3b42fe4d2e0faec0db2b858918465a2633229b265033b051958e4ccd463f9c14"} Feb 26 09:32:05 crc kubenswrapper[4741]: I0226 09:32:05.176622 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b42fe4d2e0faec0db2b858918465a2633229b265033b051958e4ccd463f9c14" Feb 26 09:32:05 crc kubenswrapper[4741]: I0226 09:32:05.176448 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534972-cvmcx" Feb 26 09:32:05 crc kubenswrapper[4741]: I0226 09:32:05.711359 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534966-b7nls"] Feb 26 09:32:05 crc kubenswrapper[4741]: I0226 09:32:05.728143 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534966-b7nls"] Feb 26 09:32:05 crc kubenswrapper[4741]: I0226 09:32:05.849263 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ecab1ab-0c41-46cd-bc17-016aa9712b46" path="/var/lib/kubelet/pods/8ecab1ab-0c41-46cd-bc17-016aa9712b46/volumes" Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.301258 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7kpm5"] Feb 26 09:32:55 crc kubenswrapper[4741]: E0226 09:32:55.302546 4741 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="65114173-847d-4dae-ace2-e4129e1c9000" containerName="oc" Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.302567 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="65114173-847d-4dae-ace2-e4129e1c9000" containerName="oc" Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.302848 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="65114173-847d-4dae-ace2-e4129e1c9000" containerName="oc" Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.305211 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.326708 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7kpm5"] Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.368681 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292406f6-16d6-45fe-8d8f-1854c4171c76-catalog-content\") pod \"redhat-operators-7kpm5\" (UID: \"292406f6-16d6-45fe-8d8f-1854c4171c76\") " pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.368735 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmf2x\" (UniqueName: \"kubernetes.io/projected/292406f6-16d6-45fe-8d8f-1854c4171c76-kube-api-access-mmf2x\") pod \"redhat-operators-7kpm5\" (UID: \"292406f6-16d6-45fe-8d8f-1854c4171c76\") " pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.369034 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292406f6-16d6-45fe-8d8f-1854c4171c76-utilities\") pod \"redhat-operators-7kpm5\" (UID: \"292406f6-16d6-45fe-8d8f-1854c4171c76\") " 
pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.471784 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292406f6-16d6-45fe-8d8f-1854c4171c76-utilities\") pod \"redhat-operators-7kpm5\" (UID: \"292406f6-16d6-45fe-8d8f-1854c4171c76\") " pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.471865 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292406f6-16d6-45fe-8d8f-1854c4171c76-catalog-content\") pod \"redhat-operators-7kpm5\" (UID: \"292406f6-16d6-45fe-8d8f-1854c4171c76\") " pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.471889 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmf2x\" (UniqueName: \"kubernetes.io/projected/292406f6-16d6-45fe-8d8f-1854c4171c76-kube-api-access-mmf2x\") pod \"redhat-operators-7kpm5\" (UID: \"292406f6-16d6-45fe-8d8f-1854c4171c76\") " pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.472661 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292406f6-16d6-45fe-8d8f-1854c4171c76-utilities\") pod \"redhat-operators-7kpm5\" (UID: \"292406f6-16d6-45fe-8d8f-1854c4171c76\") " pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.472798 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292406f6-16d6-45fe-8d8f-1854c4171c76-catalog-content\") pod \"redhat-operators-7kpm5\" (UID: \"292406f6-16d6-45fe-8d8f-1854c4171c76\") " pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:32:55 crc 
kubenswrapper[4741]: I0226 09:32:55.503354 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmf2x\" (UniqueName: \"kubernetes.io/projected/292406f6-16d6-45fe-8d8f-1854c4171c76-kube-api-access-mmf2x\") pod \"redhat-operators-7kpm5\" (UID: \"292406f6-16d6-45fe-8d8f-1854c4171c76\") " pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:32:55 crc kubenswrapper[4741]: I0226 09:32:55.638640 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:32:56 crc kubenswrapper[4741]: I0226 09:32:56.266294 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7kpm5"] Feb 26 09:32:56 crc kubenswrapper[4741]: I0226 09:32:56.851058 4741 generic.go:334] "Generic (PLEG): container finished" podID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerID="65fa952b5110bd006600faffd4581ffc4bd59d7f655fef7823d8ce2ba802ef00" exitCode=0 Feb 26 09:32:56 crc kubenswrapper[4741]: I0226 09:32:56.851179 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpm5" event={"ID":"292406f6-16d6-45fe-8d8f-1854c4171c76","Type":"ContainerDied","Data":"65fa952b5110bd006600faffd4581ffc4bd59d7f655fef7823d8ce2ba802ef00"} Feb 26 09:32:56 crc kubenswrapper[4741]: I0226 09:32:56.851381 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpm5" event={"ID":"292406f6-16d6-45fe-8d8f-1854c4171c76","Type":"ContainerStarted","Data":"26ec1fbe751b0b5746aaa54b54cf6592b656c8b7c1118c5e82bf3e04e65305a5"} Feb 26 09:32:56 crc kubenswrapper[4741]: I0226 09:32:56.853905 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 09:32:58 crc kubenswrapper[4741]: I0226 09:32:58.889452 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpm5" 
event={"ID":"292406f6-16d6-45fe-8d8f-1854c4171c76","Type":"ContainerStarted","Data":"82611a66e6a8e2dddb6921ccb9bcb37bd1bb15bbc5e7ed712ec54babef616cf7"} Feb 26 09:33:04 crc kubenswrapper[4741]: I0226 09:33:04.897731 4741 scope.go:117] "RemoveContainer" containerID="b482827d4551292d5bc50744530eaaf3956375f93f43782b73b38dd94116c69c" Feb 26 09:33:04 crc kubenswrapper[4741]: I0226 09:33:04.963160 4741 generic.go:334] "Generic (PLEG): container finished" podID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerID="82611a66e6a8e2dddb6921ccb9bcb37bd1bb15bbc5e7ed712ec54babef616cf7" exitCode=0 Feb 26 09:33:04 crc kubenswrapper[4741]: I0226 09:33:04.963226 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpm5" event={"ID":"292406f6-16d6-45fe-8d8f-1854c4171c76","Type":"ContainerDied","Data":"82611a66e6a8e2dddb6921ccb9bcb37bd1bb15bbc5e7ed712ec54babef616cf7"} Feb 26 09:33:05 crc kubenswrapper[4741]: I0226 09:33:05.983548 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpm5" event={"ID":"292406f6-16d6-45fe-8d8f-1854c4171c76","Type":"ContainerStarted","Data":"fd7b9cdc4335c763ee27be5e702e33c1d1ca4c3f3d05468f08e032c3b310a927"} Feb 26 09:33:06 crc kubenswrapper[4741]: I0226 09:33:06.022128 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7kpm5" podStartSLOduration=2.470838762 podStartE2EDuration="11.02208058s" podCreationTimestamp="2026-02-26 09:32:55 +0000 UTC" firstStartedPulling="2026-02-26 09:32:56.853630576 +0000 UTC m=+4811.849567953" lastFinishedPulling="2026-02-26 09:33:05.404872374 +0000 UTC m=+4820.400809771" observedRunningTime="2026-02-26 09:33:06.013398723 +0000 UTC m=+4821.009336110" watchObservedRunningTime="2026-02-26 09:33:06.02208058 +0000 UTC m=+4821.018017967" Feb 26 09:33:15 crc kubenswrapper[4741]: I0226 09:33:15.638851 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:33:15 crc kubenswrapper[4741]: I0226 09:33:15.639499 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:33:16 crc kubenswrapper[4741]: I0226 09:33:16.691178 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7kpm5" podUID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerName="registry-server" probeResult="failure" output=< Feb 26 09:33:16 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:33:16 crc kubenswrapper[4741]: > Feb 26 09:33:26 crc kubenswrapper[4741]: I0226 09:33:26.693922 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7kpm5" podUID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerName="registry-server" probeResult="failure" output=< Feb 26 09:33:26 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:33:26 crc kubenswrapper[4741]: > Feb 26 09:33:36 crc kubenswrapper[4741]: I0226 09:33:36.694784 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7kpm5" podUID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerName="registry-server" probeResult="failure" output=< Feb 26 09:33:36 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:33:36 crc kubenswrapper[4741]: > Feb 26 09:33:45 crc kubenswrapper[4741]: I0226 09:33:45.692439 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:33:45 crc kubenswrapper[4741]: I0226 09:33:45.750746 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:33:45 crc kubenswrapper[4741]: I0226 09:33:45.937271 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-7kpm5"] Feb 26 09:33:47 crc kubenswrapper[4741]: I0226 09:33:47.492966 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7kpm5" podUID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerName="registry-server" containerID="cri-o://fd7b9cdc4335c763ee27be5e702e33c1d1ca4c3f3d05468f08e032c3b310a927" gracePeriod=2 Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.106978 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.265185 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmf2x\" (UniqueName: \"kubernetes.io/projected/292406f6-16d6-45fe-8d8f-1854c4171c76-kube-api-access-mmf2x\") pod \"292406f6-16d6-45fe-8d8f-1854c4171c76\" (UID: \"292406f6-16d6-45fe-8d8f-1854c4171c76\") " Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.265572 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292406f6-16d6-45fe-8d8f-1854c4171c76-utilities\") pod \"292406f6-16d6-45fe-8d8f-1854c4171c76\" (UID: \"292406f6-16d6-45fe-8d8f-1854c4171c76\") " Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.265747 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292406f6-16d6-45fe-8d8f-1854c4171c76-catalog-content\") pod \"292406f6-16d6-45fe-8d8f-1854c4171c76\" (UID: \"292406f6-16d6-45fe-8d8f-1854c4171c76\") " Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.266581 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/292406f6-16d6-45fe-8d8f-1854c4171c76-utilities" (OuterVolumeSpecName: "utilities") pod "292406f6-16d6-45fe-8d8f-1854c4171c76" (UID: 
"292406f6-16d6-45fe-8d8f-1854c4171c76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.267290 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292406f6-16d6-45fe-8d8f-1854c4171c76-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.273856 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/292406f6-16d6-45fe-8d8f-1854c4171c76-kube-api-access-mmf2x" (OuterVolumeSpecName: "kube-api-access-mmf2x") pod "292406f6-16d6-45fe-8d8f-1854c4171c76" (UID: "292406f6-16d6-45fe-8d8f-1854c4171c76"). InnerVolumeSpecName "kube-api-access-mmf2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.371002 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmf2x\" (UniqueName: \"kubernetes.io/projected/292406f6-16d6-45fe-8d8f-1854c4171c76-kube-api-access-mmf2x\") on node \"crc\" DevicePath \"\"" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.408438 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/292406f6-16d6-45fe-8d8f-1854c4171c76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "292406f6-16d6-45fe-8d8f-1854c4171c76" (UID: "292406f6-16d6-45fe-8d8f-1854c4171c76"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.474986 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292406f6-16d6-45fe-8d8f-1854c4171c76-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.507812 4741 generic.go:334] "Generic (PLEG): container finished" podID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerID="fd7b9cdc4335c763ee27be5e702e33c1d1ca4c3f3d05468f08e032c3b310a927" exitCode=0 Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.507865 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpm5" event={"ID":"292406f6-16d6-45fe-8d8f-1854c4171c76","Type":"ContainerDied","Data":"fd7b9cdc4335c763ee27be5e702e33c1d1ca4c3f3d05468f08e032c3b310a927"} Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.507906 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpm5" event={"ID":"292406f6-16d6-45fe-8d8f-1854c4171c76","Type":"ContainerDied","Data":"26ec1fbe751b0b5746aaa54b54cf6592b656c8b7c1118c5e82bf3e04e65305a5"} Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.507912 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7kpm5" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.507931 4741 scope.go:117] "RemoveContainer" containerID="fd7b9cdc4335c763ee27be5e702e33c1d1ca4c3f3d05468f08e032c3b310a927" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.536947 4741 scope.go:117] "RemoveContainer" containerID="82611a66e6a8e2dddb6921ccb9bcb37bd1bb15bbc5e7ed712ec54babef616cf7" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.566820 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7kpm5"] Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.578102 4741 scope.go:117] "RemoveContainer" containerID="65fa952b5110bd006600faffd4581ffc4bd59d7f655fef7823d8ce2ba802ef00" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.579889 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7kpm5"] Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.636621 4741 scope.go:117] "RemoveContainer" containerID="fd7b9cdc4335c763ee27be5e702e33c1d1ca4c3f3d05468f08e032c3b310a927" Feb 26 09:33:48 crc kubenswrapper[4741]: E0226 09:33:48.637612 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd7b9cdc4335c763ee27be5e702e33c1d1ca4c3f3d05468f08e032c3b310a927\": container with ID starting with fd7b9cdc4335c763ee27be5e702e33c1d1ca4c3f3d05468f08e032c3b310a927 not found: ID does not exist" containerID="fd7b9cdc4335c763ee27be5e702e33c1d1ca4c3f3d05468f08e032c3b310a927" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.637671 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd7b9cdc4335c763ee27be5e702e33c1d1ca4c3f3d05468f08e032c3b310a927"} err="failed to get container status \"fd7b9cdc4335c763ee27be5e702e33c1d1ca4c3f3d05468f08e032c3b310a927\": rpc error: code = NotFound desc = could not find container 
\"fd7b9cdc4335c763ee27be5e702e33c1d1ca4c3f3d05468f08e032c3b310a927\": container with ID starting with fd7b9cdc4335c763ee27be5e702e33c1d1ca4c3f3d05468f08e032c3b310a927 not found: ID does not exist" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.637710 4741 scope.go:117] "RemoveContainer" containerID="82611a66e6a8e2dddb6921ccb9bcb37bd1bb15bbc5e7ed712ec54babef616cf7" Feb 26 09:33:48 crc kubenswrapper[4741]: E0226 09:33:48.637992 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82611a66e6a8e2dddb6921ccb9bcb37bd1bb15bbc5e7ed712ec54babef616cf7\": container with ID starting with 82611a66e6a8e2dddb6921ccb9bcb37bd1bb15bbc5e7ed712ec54babef616cf7 not found: ID does not exist" containerID="82611a66e6a8e2dddb6921ccb9bcb37bd1bb15bbc5e7ed712ec54babef616cf7" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.638057 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82611a66e6a8e2dddb6921ccb9bcb37bd1bb15bbc5e7ed712ec54babef616cf7"} err="failed to get container status \"82611a66e6a8e2dddb6921ccb9bcb37bd1bb15bbc5e7ed712ec54babef616cf7\": rpc error: code = NotFound desc = could not find container \"82611a66e6a8e2dddb6921ccb9bcb37bd1bb15bbc5e7ed712ec54babef616cf7\": container with ID starting with 82611a66e6a8e2dddb6921ccb9bcb37bd1bb15bbc5e7ed712ec54babef616cf7 not found: ID does not exist" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.638077 4741 scope.go:117] "RemoveContainer" containerID="65fa952b5110bd006600faffd4581ffc4bd59d7f655fef7823d8ce2ba802ef00" Feb 26 09:33:48 crc kubenswrapper[4741]: E0226 09:33:48.638589 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65fa952b5110bd006600faffd4581ffc4bd59d7f655fef7823d8ce2ba802ef00\": container with ID starting with 65fa952b5110bd006600faffd4581ffc4bd59d7f655fef7823d8ce2ba802ef00 not found: ID does not exist" 
containerID="65fa952b5110bd006600faffd4581ffc4bd59d7f655fef7823d8ce2ba802ef00" Feb 26 09:33:48 crc kubenswrapper[4741]: I0226 09:33:48.638617 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65fa952b5110bd006600faffd4581ffc4bd59d7f655fef7823d8ce2ba802ef00"} err="failed to get container status \"65fa952b5110bd006600faffd4581ffc4bd59d7f655fef7823d8ce2ba802ef00\": rpc error: code = NotFound desc = could not find container \"65fa952b5110bd006600faffd4581ffc4bd59d7f655fef7823d8ce2ba802ef00\": container with ID starting with 65fa952b5110bd006600faffd4581ffc4bd59d7f655fef7823d8ce2ba802ef00 not found: ID does not exist" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.555764 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-46z98"] Feb 26 09:33:49 crc kubenswrapper[4741]: E0226 09:33:49.557136 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerName="extract-utilities" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.557159 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerName="extract-utilities" Feb 26 09:33:49 crc kubenswrapper[4741]: E0226 09:33:49.557207 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerName="extract-content" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.557215 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerName="extract-content" Feb 26 09:33:49 crc kubenswrapper[4741]: E0226 09:33:49.557236 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerName="registry-server" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.557244 4741 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerName="registry-server" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.557559 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="292406f6-16d6-45fe-8d8f-1854c4171c76" containerName="registry-server" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.559717 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.571046 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46z98"] Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.713692 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvtmn\" (UniqueName: \"kubernetes.io/projected/58c4c719-78c0-4700-960b-0c4c17da8bf0-kube-api-access-jvtmn\") pod \"certified-operators-46z98\" (UID: \"58c4c719-78c0-4700-960b-0c4c17da8bf0\") " pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.714395 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4c719-78c0-4700-960b-0c4c17da8bf0-utilities\") pod \"certified-operators-46z98\" (UID: \"58c4c719-78c0-4700-960b-0c4c17da8bf0\") " pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.714728 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4c719-78c0-4700-960b-0c4c17da8bf0-catalog-content\") pod \"certified-operators-46z98\" (UID: \"58c4c719-78c0-4700-960b-0c4c17da8bf0\") " pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.802944 4741 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="292406f6-16d6-45fe-8d8f-1854c4171c76" path="/var/lib/kubelet/pods/292406f6-16d6-45fe-8d8f-1854c4171c76/volumes" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.818418 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4c719-78c0-4700-960b-0c4c17da8bf0-catalog-content\") pod \"certified-operators-46z98\" (UID: \"58c4c719-78c0-4700-960b-0c4c17da8bf0\") " pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.818536 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvtmn\" (UniqueName: \"kubernetes.io/projected/58c4c719-78c0-4700-960b-0c4c17da8bf0-kube-api-access-jvtmn\") pod \"certified-operators-46z98\" (UID: \"58c4c719-78c0-4700-960b-0c4c17da8bf0\") " pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.818659 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4c719-78c0-4700-960b-0c4c17da8bf0-utilities\") pod \"certified-operators-46z98\" (UID: \"58c4c719-78c0-4700-960b-0c4c17da8bf0\") " pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.819273 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4c719-78c0-4700-960b-0c4c17da8bf0-utilities\") pod \"certified-operators-46z98\" (UID: \"58c4c719-78c0-4700-960b-0c4c17da8bf0\") " pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:33:49 crc kubenswrapper[4741]: I0226 09:33:49.819523 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4c719-78c0-4700-960b-0c4c17da8bf0-catalog-content\") pod \"certified-operators-46z98\" 
(UID: \"58c4c719-78c0-4700-960b-0c4c17da8bf0\") " pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:33:50 crc kubenswrapper[4741]: I0226 09:33:50.503600 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvtmn\" (UniqueName: \"kubernetes.io/projected/58c4c719-78c0-4700-960b-0c4c17da8bf0-kube-api-access-jvtmn\") pod \"certified-operators-46z98\" (UID: \"58c4c719-78c0-4700-960b-0c4c17da8bf0\") " pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:33:50 crc kubenswrapper[4741]: I0226 09:33:50.780783 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:33:51 crc kubenswrapper[4741]: I0226 09:33:51.294287 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46z98"] Feb 26 09:33:51 crc kubenswrapper[4741]: I0226 09:33:51.546930 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46z98" event={"ID":"58c4c719-78c0-4700-960b-0c4c17da8bf0","Type":"ContainerStarted","Data":"a49f636cd8ae23d5d85e2d41615f5bc226df22864faa47c4198f4f69201873e3"} Feb 26 09:33:52 crc kubenswrapper[4741]: I0226 09:33:52.560648 4741 generic.go:334] "Generic (PLEG): container finished" podID="58c4c719-78c0-4700-960b-0c4c17da8bf0" containerID="f6a2f3058abe5fb9442ca85e65bfd6b43a535a86e8a1ce53376633495af2ed45" exitCode=0 Feb 26 09:33:52 crc kubenswrapper[4741]: I0226 09:33:52.560932 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46z98" event={"ID":"58c4c719-78c0-4700-960b-0c4c17da8bf0","Type":"ContainerDied","Data":"f6a2f3058abe5fb9442ca85e65bfd6b43a535a86e8a1ce53376633495af2ed45"} Feb 26 09:33:53 crc kubenswrapper[4741]: I0226 09:33:53.576309 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46z98" 
event={"ID":"58c4c719-78c0-4700-960b-0c4c17da8bf0","Type":"ContainerStarted","Data":"4fe518718026fdc302452a54f07ac6b92627c436a826f868737935c76fd8f46a"} Feb 26 09:33:54 crc kubenswrapper[4741]: I0226 09:33:54.592121 4741 generic.go:334] "Generic (PLEG): container finished" podID="58c4c719-78c0-4700-960b-0c4c17da8bf0" containerID="4fe518718026fdc302452a54f07ac6b92627c436a826f868737935c76fd8f46a" exitCode=0 Feb 26 09:33:54 crc kubenswrapper[4741]: I0226 09:33:54.592277 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46z98" event={"ID":"58c4c719-78c0-4700-960b-0c4c17da8bf0","Type":"ContainerDied","Data":"4fe518718026fdc302452a54f07ac6b92627c436a826f868737935c76fd8f46a"} Feb 26 09:33:55 crc kubenswrapper[4741]: I0226 09:33:55.607626 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46z98" event={"ID":"58c4c719-78c0-4700-960b-0c4c17da8bf0","Type":"ContainerStarted","Data":"f18db782b56b55d9a9df0c206cc1bd9f2b120e21c5e20ef9ce6370de873b8ee0"} Feb 26 09:33:55 crc kubenswrapper[4741]: I0226 09:33:55.636222 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-46z98" podStartSLOduration=4.218477927 podStartE2EDuration="6.636165549s" podCreationTimestamp="2026-02-26 09:33:49 +0000 UTC" firstStartedPulling="2026-02-26 09:33:52.563962312 +0000 UTC m=+4867.559899699" lastFinishedPulling="2026-02-26 09:33:54.981649934 +0000 UTC m=+4869.977587321" observedRunningTime="2026-02-26 09:33:55.628618795 +0000 UTC m=+4870.624556192" watchObservedRunningTime="2026-02-26 09:33:55.636165549 +0000 UTC m=+4870.632102936" Feb 26 09:34:00 crc kubenswrapper[4741]: I0226 09:34:00.174130 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534974-hwz24"] Feb 26 09:34:00 crc kubenswrapper[4741]: I0226 09:34:00.177101 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534974-hwz24" Feb 26 09:34:00 crc kubenswrapper[4741]: I0226 09:34:00.180170 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:34:00 crc kubenswrapper[4741]: I0226 09:34:00.180273 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:34:00 crc kubenswrapper[4741]: I0226 09:34:00.180165 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:34:00 crc kubenswrapper[4741]: I0226 09:34:00.195545 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534974-hwz24"] Feb 26 09:34:00 crc kubenswrapper[4741]: I0226 09:34:00.345873 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9kb9\" (UniqueName: \"kubernetes.io/projected/d7c6c2c7-8db9-429c-994b-74d9ded9fcd8-kube-api-access-f9kb9\") pod \"auto-csr-approver-29534974-hwz24\" (UID: \"d7c6c2c7-8db9-429c-994b-74d9ded9fcd8\") " pod="openshift-infra/auto-csr-approver-29534974-hwz24" Feb 26 09:34:00 crc kubenswrapper[4741]: I0226 09:34:00.449145 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9kb9\" (UniqueName: \"kubernetes.io/projected/d7c6c2c7-8db9-429c-994b-74d9ded9fcd8-kube-api-access-f9kb9\") pod \"auto-csr-approver-29534974-hwz24\" (UID: \"d7c6c2c7-8db9-429c-994b-74d9ded9fcd8\") " pod="openshift-infra/auto-csr-approver-29534974-hwz24" Feb 26 09:34:00 crc kubenswrapper[4741]: I0226 09:34:00.479396 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9kb9\" (UniqueName: \"kubernetes.io/projected/d7c6c2c7-8db9-429c-994b-74d9ded9fcd8-kube-api-access-f9kb9\") pod \"auto-csr-approver-29534974-hwz24\" (UID: \"d7c6c2c7-8db9-429c-994b-74d9ded9fcd8\") " 
pod="openshift-infra/auto-csr-approver-29534974-hwz24" Feb 26 09:34:00 crc kubenswrapper[4741]: I0226 09:34:00.511002 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534974-hwz24" Feb 26 09:34:00 crc kubenswrapper[4741]: I0226 09:34:00.781293 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:34:00 crc kubenswrapper[4741]: I0226 09:34:00.781624 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:34:00 crc kubenswrapper[4741]: I0226 09:34:00.842398 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:34:01 crc kubenswrapper[4741]: I0226 09:34:01.017507 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534974-hwz24"] Feb 26 09:34:01 crc kubenswrapper[4741]: I0226 09:34:01.703425 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534974-hwz24" event={"ID":"d7c6c2c7-8db9-429c-994b-74d9ded9fcd8","Type":"ContainerStarted","Data":"4841dbdaf2a7712b9365fed685bb6c3b8145fc8524215c5086159cd43bea79fd"} Feb 26 09:34:01 crc kubenswrapper[4741]: I0226 09:34:01.761872 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:34:01 crc kubenswrapper[4741]: I0226 09:34:01.829599 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-46z98"] Feb 26 09:34:03 crc kubenswrapper[4741]: I0226 09:34:03.734744 4741 generic.go:334] "Generic (PLEG): container finished" podID="d7c6c2c7-8db9-429c-994b-74d9ded9fcd8" containerID="de46e9ca1aae1d279cb4267e58405dce9481649bd64e5e3c8d8ce6706958d300" exitCode=0 Feb 26 09:34:03 crc kubenswrapper[4741]: I0226 
09:34:03.735796 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-46z98" podUID="58c4c719-78c0-4700-960b-0c4c17da8bf0" containerName="registry-server" containerID="cri-o://f18db782b56b55d9a9df0c206cc1bd9f2b120e21c5e20ef9ce6370de873b8ee0" gracePeriod=2 Feb 26 09:34:03 crc kubenswrapper[4741]: I0226 09:34:03.736306 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534974-hwz24" event={"ID":"d7c6c2c7-8db9-429c-994b-74d9ded9fcd8","Type":"ContainerDied","Data":"de46e9ca1aae1d279cb4267e58405dce9481649bd64e5e3c8d8ce6706958d300"} Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.495519 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.593204 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4c719-78c0-4700-960b-0c4c17da8bf0-catalog-content\") pod \"58c4c719-78c0-4700-960b-0c4c17da8bf0\" (UID: \"58c4c719-78c0-4700-960b-0c4c17da8bf0\") " Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.594026 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4c719-78c0-4700-960b-0c4c17da8bf0-utilities\") pod \"58c4c719-78c0-4700-960b-0c4c17da8bf0\" (UID: \"58c4c719-78c0-4700-960b-0c4c17da8bf0\") " Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.594170 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvtmn\" (UniqueName: \"kubernetes.io/projected/58c4c719-78c0-4700-960b-0c4c17da8bf0-kube-api-access-jvtmn\") pod \"58c4c719-78c0-4700-960b-0c4c17da8bf0\" (UID: \"58c4c719-78c0-4700-960b-0c4c17da8bf0\") " Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.595672 4741 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58c4c719-78c0-4700-960b-0c4c17da8bf0-utilities" (OuterVolumeSpecName: "utilities") pod "58c4c719-78c0-4700-960b-0c4c17da8bf0" (UID: "58c4c719-78c0-4700-960b-0c4c17da8bf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.600831 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c4c719-78c0-4700-960b-0c4c17da8bf0-kube-api-access-jvtmn" (OuterVolumeSpecName: "kube-api-access-jvtmn") pod "58c4c719-78c0-4700-960b-0c4c17da8bf0" (UID: "58c4c719-78c0-4700-960b-0c4c17da8bf0"). InnerVolumeSpecName "kube-api-access-jvtmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.697631 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4c719-78c0-4700-960b-0c4c17da8bf0-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.697937 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvtmn\" (UniqueName: \"kubernetes.io/projected/58c4c719-78c0-4700-960b-0c4c17da8bf0-kube-api-access-jvtmn\") on node \"crc\" DevicePath \"\"" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.749035 4741 generic.go:334] "Generic (PLEG): container finished" podID="58c4c719-78c0-4700-960b-0c4c17da8bf0" containerID="f18db782b56b55d9a9df0c206cc1bd9f2b120e21c5e20ef9ce6370de873b8ee0" exitCode=0 Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.749143 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-46z98" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.749135 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46z98" event={"ID":"58c4c719-78c0-4700-960b-0c4c17da8bf0","Type":"ContainerDied","Data":"f18db782b56b55d9a9df0c206cc1bd9f2b120e21c5e20ef9ce6370de873b8ee0"} Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.749205 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46z98" event={"ID":"58c4c719-78c0-4700-960b-0c4c17da8bf0","Type":"ContainerDied","Data":"a49f636cd8ae23d5d85e2d41615f5bc226df22864faa47c4198f4f69201873e3"} Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.749227 4741 scope.go:117] "RemoveContainer" containerID="f18db782b56b55d9a9df0c206cc1bd9f2b120e21c5e20ef9ce6370de873b8ee0" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.772324 4741 scope.go:117] "RemoveContainer" containerID="4fe518718026fdc302452a54f07ac6b92627c436a826f868737935c76fd8f46a" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.797287 4741 scope.go:117] "RemoveContainer" containerID="f6a2f3058abe5fb9442ca85e65bfd6b43a535a86e8a1ce53376633495af2ed45" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.885437 4741 scope.go:117] "RemoveContainer" containerID="f18db782b56b55d9a9df0c206cc1bd9f2b120e21c5e20ef9ce6370de873b8ee0" Feb 26 09:34:04 crc kubenswrapper[4741]: E0226 09:34:04.888654 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f18db782b56b55d9a9df0c206cc1bd9f2b120e21c5e20ef9ce6370de873b8ee0\": container with ID starting with f18db782b56b55d9a9df0c206cc1bd9f2b120e21c5e20ef9ce6370de873b8ee0 not found: ID does not exist" containerID="f18db782b56b55d9a9df0c206cc1bd9f2b120e21c5e20ef9ce6370de873b8ee0" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.888701 4741 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f18db782b56b55d9a9df0c206cc1bd9f2b120e21c5e20ef9ce6370de873b8ee0"} err="failed to get container status \"f18db782b56b55d9a9df0c206cc1bd9f2b120e21c5e20ef9ce6370de873b8ee0\": rpc error: code = NotFound desc = could not find container \"f18db782b56b55d9a9df0c206cc1bd9f2b120e21c5e20ef9ce6370de873b8ee0\": container with ID starting with f18db782b56b55d9a9df0c206cc1bd9f2b120e21c5e20ef9ce6370de873b8ee0 not found: ID does not exist" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.888732 4741 scope.go:117] "RemoveContainer" containerID="4fe518718026fdc302452a54f07ac6b92627c436a826f868737935c76fd8f46a" Feb 26 09:34:04 crc kubenswrapper[4741]: E0226 09:34:04.889991 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fe518718026fdc302452a54f07ac6b92627c436a826f868737935c76fd8f46a\": container with ID starting with 4fe518718026fdc302452a54f07ac6b92627c436a826f868737935c76fd8f46a not found: ID does not exist" containerID="4fe518718026fdc302452a54f07ac6b92627c436a826f868737935c76fd8f46a" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.890056 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fe518718026fdc302452a54f07ac6b92627c436a826f868737935c76fd8f46a"} err="failed to get container status \"4fe518718026fdc302452a54f07ac6b92627c436a826f868737935c76fd8f46a\": rpc error: code = NotFound desc = could not find container \"4fe518718026fdc302452a54f07ac6b92627c436a826f868737935c76fd8f46a\": container with ID starting with 4fe518718026fdc302452a54f07ac6b92627c436a826f868737935c76fd8f46a not found: ID does not exist" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.890097 4741 scope.go:117] "RemoveContainer" containerID="f6a2f3058abe5fb9442ca85e65bfd6b43a535a86e8a1ce53376633495af2ed45" Feb 26 09:34:04 crc kubenswrapper[4741]: E0226 09:34:04.890742 4741 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6a2f3058abe5fb9442ca85e65bfd6b43a535a86e8a1ce53376633495af2ed45\": container with ID starting with f6a2f3058abe5fb9442ca85e65bfd6b43a535a86e8a1ce53376633495af2ed45 not found: ID does not exist" containerID="f6a2f3058abe5fb9442ca85e65bfd6b43a535a86e8a1ce53376633495af2ed45" Feb 26 09:34:04 crc kubenswrapper[4741]: I0226 09:34:04.890807 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a2f3058abe5fb9442ca85e65bfd6b43a535a86e8a1ce53376633495af2ed45"} err="failed to get container status \"f6a2f3058abe5fb9442ca85e65bfd6b43a535a86e8a1ce53376633495af2ed45\": rpc error: code = NotFound desc = could not find container \"f6a2f3058abe5fb9442ca85e65bfd6b43a535a86e8a1ce53376633495af2ed45\": container with ID starting with f6a2f3058abe5fb9442ca85e65bfd6b43a535a86e8a1ce53376633495af2ed45 not found: ID does not exist" Feb 26 09:34:05 crc kubenswrapper[4741]: I0226 09:34:05.302760 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534974-hwz24" Feb 26 09:34:05 crc kubenswrapper[4741]: I0226 09:34:05.322752 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9kb9\" (UniqueName: \"kubernetes.io/projected/d7c6c2c7-8db9-429c-994b-74d9ded9fcd8-kube-api-access-f9kb9\") pod \"d7c6c2c7-8db9-429c-994b-74d9ded9fcd8\" (UID: \"d7c6c2c7-8db9-429c-994b-74d9ded9fcd8\") " Feb 26 09:34:05 crc kubenswrapper[4741]: I0226 09:34:05.329154 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c6c2c7-8db9-429c-994b-74d9ded9fcd8-kube-api-access-f9kb9" (OuterVolumeSpecName: "kube-api-access-f9kb9") pod "d7c6c2c7-8db9-429c-994b-74d9ded9fcd8" (UID: "d7c6c2c7-8db9-429c-994b-74d9ded9fcd8"). InnerVolumeSpecName "kube-api-access-f9kb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:34:05 crc kubenswrapper[4741]: I0226 09:34:05.392095 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58c4c719-78c0-4700-960b-0c4c17da8bf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58c4c719-78c0-4700-960b-0c4c17da8bf0" (UID: "58c4c719-78c0-4700-960b-0c4c17da8bf0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:34:05 crc kubenswrapper[4741]: I0226 09:34:05.426772 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4c719-78c0-4700-960b-0c4c17da8bf0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:34:05 crc kubenswrapper[4741]: I0226 09:34:05.426822 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9kb9\" (UniqueName: \"kubernetes.io/projected/d7c6c2c7-8db9-429c-994b-74d9ded9fcd8-kube-api-access-f9kb9\") on node \"crc\" DevicePath \"\"" Feb 26 09:34:05 crc kubenswrapper[4741]: I0226 09:34:05.690897 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-46z98"] Feb 26 09:34:05 crc kubenswrapper[4741]: I0226 09:34:05.709086 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-46z98"] Feb 26 09:34:05 crc kubenswrapper[4741]: I0226 09:34:05.761757 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534974-hwz24" event={"ID":"d7c6c2c7-8db9-429c-994b-74d9ded9fcd8","Type":"ContainerDied","Data":"4841dbdaf2a7712b9365fed685bb6c3b8145fc8524215c5086159cd43bea79fd"} Feb 26 09:34:05 crc kubenswrapper[4741]: I0226 09:34:05.761812 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4841dbdaf2a7712b9365fed685bb6c3b8145fc8524215c5086159cd43bea79fd" Feb 26 09:34:05 crc kubenswrapper[4741]: I0226 
09:34:05.761820 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534974-hwz24" Feb 26 09:34:05 crc kubenswrapper[4741]: I0226 09:34:05.806003 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58c4c719-78c0-4700-960b-0c4c17da8bf0" path="/var/lib/kubelet/pods/58c4c719-78c0-4700-960b-0c4c17da8bf0/volumes" Feb 26 09:34:06 crc kubenswrapper[4741]: I0226 09:34:06.411279 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534968-dllb4"] Feb 26 09:34:06 crc kubenswrapper[4741]: I0226 09:34:06.422785 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534968-dllb4"] Feb 26 09:34:07 crc kubenswrapper[4741]: I0226 09:34:07.803023 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e62028-917f-48e3-a0ad-07c33f0e67c0" path="/var/lib/kubelet/pods/72e62028-917f-48e3-a0ad-07c33f0e67c0/volumes" Feb 26 09:34:25 crc kubenswrapper[4741]: I0226 09:34:25.149182 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:34:25 crc kubenswrapper[4741]: I0226 09:34:25.149835 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:34:55 crc kubenswrapper[4741]: I0226 09:34:55.149329 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:34:55 crc kubenswrapper[4741]: I0226 09:34:55.150021 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:35:05 crc kubenswrapper[4741]: I0226 09:35:05.093866 4741 scope.go:117] "RemoveContainer" containerID="99cce85029b51712a8ab578ff9290821894a55aa11d0082e4e65b14e7afd335c" Feb 26 09:35:25 crc kubenswrapper[4741]: I0226 09:35:25.148816 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:35:25 crc kubenswrapper[4741]: I0226 09:35:25.149411 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:35:25 crc kubenswrapper[4741]: I0226 09:35:25.149496 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 09:35:25 crc kubenswrapper[4741]: I0226 09:35:25.150800 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64c069b554b565268c509c561ac65dd91439859fc90f66377d7b7cbce0b3b518"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Feb 26 09:35:25 crc kubenswrapper[4741]: I0226 09:35:25.150878 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://64c069b554b565268c509c561ac65dd91439859fc90f66377d7b7cbce0b3b518" gracePeriod=600 Feb 26 09:35:25 crc kubenswrapper[4741]: E0226 09:35:25.314463 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c7b5b01_4061_4003_b002_a977260886c5.slice/crio-conmon-64c069b554b565268c509c561ac65dd91439859fc90f66377d7b7cbce0b3b518.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c7b5b01_4061_4003_b002_a977260886c5.slice/crio-64c069b554b565268c509c561ac65dd91439859fc90f66377d7b7cbce0b3b518.scope\": RecentStats: unable to find data in memory cache]" Feb 26 09:35:25 crc kubenswrapper[4741]: I0226 09:35:25.865972 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="64c069b554b565268c509c561ac65dd91439859fc90f66377d7b7cbce0b3b518" exitCode=0 Feb 26 09:35:25 crc kubenswrapper[4741]: I0226 09:35:25.866190 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"64c069b554b565268c509c561ac65dd91439859fc90f66377d7b7cbce0b3b518"} Feb 26 09:35:25 crc kubenswrapper[4741]: I0226 09:35:25.866442 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" 
event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4"} Feb 26 09:35:25 crc kubenswrapper[4741]: I0226 09:35:25.866481 4741 scope.go:117] "RemoveContainer" containerID="f49cccb842a1b8d4ea315c62d5534f0774ef70d1e160bad0794c9ad848320922" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.153517 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534976-p8lnt"] Feb 26 09:36:00 crc kubenswrapper[4741]: E0226 09:36:00.154611 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c6c2c7-8db9-429c-994b-74d9ded9fcd8" containerName="oc" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.154628 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c6c2c7-8db9-429c-994b-74d9ded9fcd8" containerName="oc" Feb 26 09:36:00 crc kubenswrapper[4741]: E0226 09:36:00.154645 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c4c719-78c0-4700-960b-0c4c17da8bf0" containerName="extract-utilities" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.154651 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c4c719-78c0-4700-960b-0c4c17da8bf0" containerName="extract-utilities" Feb 26 09:36:00 crc kubenswrapper[4741]: E0226 09:36:00.154683 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c4c719-78c0-4700-960b-0c4c17da8bf0" containerName="extract-content" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.154690 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c4c719-78c0-4700-960b-0c4c17da8bf0" containerName="extract-content" Feb 26 09:36:00 crc kubenswrapper[4741]: E0226 09:36:00.154714 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c4c719-78c0-4700-960b-0c4c17da8bf0" containerName="registry-server" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.154720 4741 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="58c4c719-78c0-4700-960b-0c4c17da8bf0" containerName="registry-server" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.154991 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c6c2c7-8db9-429c-994b-74d9ded9fcd8" containerName="oc" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.155014 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c4c719-78c0-4700-960b-0c4c17da8bf0" containerName="registry-server" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.155993 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534976-p8lnt" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.158499 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.158646 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.159763 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.176181 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534976-p8lnt"] Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.260069 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj469\" (UniqueName: \"kubernetes.io/projected/3c470433-6812-4042-b1e1-f7b1d3aa4696-kube-api-access-nj469\") pod \"auto-csr-approver-29534976-p8lnt\" (UID: \"3c470433-6812-4042-b1e1-f7b1d3aa4696\") " pod="openshift-infra/auto-csr-approver-29534976-p8lnt" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.363054 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj469\" (UniqueName: 
\"kubernetes.io/projected/3c470433-6812-4042-b1e1-f7b1d3aa4696-kube-api-access-nj469\") pod \"auto-csr-approver-29534976-p8lnt\" (UID: \"3c470433-6812-4042-b1e1-f7b1d3aa4696\") " pod="openshift-infra/auto-csr-approver-29534976-p8lnt" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.504072 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj469\" (UniqueName: \"kubernetes.io/projected/3c470433-6812-4042-b1e1-f7b1d3aa4696-kube-api-access-nj469\") pod \"auto-csr-approver-29534976-p8lnt\" (UID: \"3c470433-6812-4042-b1e1-f7b1d3aa4696\") " pod="openshift-infra/auto-csr-approver-29534976-p8lnt" Feb 26 09:36:00 crc kubenswrapper[4741]: I0226 09:36:00.782916 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534976-p8lnt" Feb 26 09:36:01 crc kubenswrapper[4741]: I0226 09:36:01.606064 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534976-p8lnt"] Feb 26 09:36:02 crc kubenswrapper[4741]: I0226 09:36:02.344995 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534976-p8lnt" event={"ID":"3c470433-6812-4042-b1e1-f7b1d3aa4696","Type":"ContainerStarted","Data":"9e3d0561b3ec08e586c7d5c56d6868ec492f87dbfa7dbfae4245b1e01b759bbf"} Feb 26 09:36:06 crc kubenswrapper[4741]: I0226 09:36:06.396132 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534976-p8lnt" event={"ID":"3c470433-6812-4042-b1e1-f7b1d3aa4696","Type":"ContainerStarted","Data":"c212b7d32a3d0c20e4822dc5d36ec107fa59561bb1c8a0602ef189b205b24424"} Feb 26 09:36:06 crc kubenswrapper[4741]: I0226 09:36:06.417734 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534976-p8lnt" podStartSLOduration=2.409225901 podStartE2EDuration="6.417699864s" podCreationTimestamp="2026-02-26 09:36:00 +0000 UTC" 
firstStartedPulling="2026-02-26 09:36:01.617353786 +0000 UTC m=+4996.613291173" lastFinishedPulling="2026-02-26 09:36:05.625827709 +0000 UTC m=+5000.621765136" observedRunningTime="2026-02-26 09:36:06.414305318 +0000 UTC m=+5001.410242705" watchObservedRunningTime="2026-02-26 09:36:06.417699864 +0000 UTC m=+5001.413637291" Feb 26 09:36:08 crc kubenswrapper[4741]: I0226 09:36:08.425018 4741 generic.go:334] "Generic (PLEG): container finished" podID="3c470433-6812-4042-b1e1-f7b1d3aa4696" containerID="c212b7d32a3d0c20e4822dc5d36ec107fa59561bb1c8a0602ef189b205b24424" exitCode=0 Feb 26 09:36:08 crc kubenswrapper[4741]: I0226 09:36:08.425128 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534976-p8lnt" event={"ID":"3c470433-6812-4042-b1e1-f7b1d3aa4696","Type":"ContainerDied","Data":"c212b7d32a3d0c20e4822dc5d36ec107fa59561bb1c8a0602ef189b205b24424"} Feb 26 09:36:09 crc kubenswrapper[4741]: I0226 09:36:09.876811 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534976-p8lnt" Feb 26 09:36:09 crc kubenswrapper[4741]: I0226 09:36:09.957259 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj469\" (UniqueName: \"kubernetes.io/projected/3c470433-6812-4042-b1e1-f7b1d3aa4696-kube-api-access-nj469\") pod \"3c470433-6812-4042-b1e1-f7b1d3aa4696\" (UID: \"3c470433-6812-4042-b1e1-f7b1d3aa4696\") " Feb 26 09:36:09 crc kubenswrapper[4741]: I0226 09:36:09.965620 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c470433-6812-4042-b1e1-f7b1d3aa4696-kube-api-access-nj469" (OuterVolumeSpecName: "kube-api-access-nj469") pod "3c470433-6812-4042-b1e1-f7b1d3aa4696" (UID: "3c470433-6812-4042-b1e1-f7b1d3aa4696"). InnerVolumeSpecName "kube-api-access-nj469". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:36:10 crc kubenswrapper[4741]: I0226 09:36:10.060994 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj469\" (UniqueName: \"kubernetes.io/projected/3c470433-6812-4042-b1e1-f7b1d3aa4696-kube-api-access-nj469\") on node \"crc\" DevicePath \"\"" Feb 26 09:36:10 crc kubenswrapper[4741]: I0226 09:36:10.493510 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534976-p8lnt" event={"ID":"3c470433-6812-4042-b1e1-f7b1d3aa4696","Type":"ContainerDied","Data":"9e3d0561b3ec08e586c7d5c56d6868ec492f87dbfa7dbfae4245b1e01b759bbf"} Feb 26 09:36:10 crc kubenswrapper[4741]: I0226 09:36:10.493571 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e3d0561b3ec08e586c7d5c56d6868ec492f87dbfa7dbfae4245b1e01b759bbf" Feb 26 09:36:10 crc kubenswrapper[4741]: I0226 09:36:10.493649 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534976-p8lnt" Feb 26 09:36:10 crc kubenswrapper[4741]: I0226 09:36:10.559528 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534970-mjm9d"] Feb 26 09:36:10 crc kubenswrapper[4741]: I0226 09:36:10.577811 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534970-mjm9d"] Feb 26 09:36:11 crc kubenswrapper[4741]: I0226 09:36:11.804190 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c7959ec-608e-497d-86c2-13070fe0d48c" path="/var/lib/kubelet/pods/6c7959ec-608e-497d-86c2-13070fe0d48c/volumes" Feb 26 09:36:20 crc kubenswrapper[4741]: I0226 09:36:20.977950 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 09:36:20 crc kubenswrapper[4741]: E0226 09:36:20.979496 4741 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3c470433-6812-4042-b1e1-f7b1d3aa4696" containerName="oc" Feb 26 09:36:20 crc kubenswrapper[4741]: I0226 09:36:20.979515 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c470433-6812-4042-b1e1-f7b1d3aa4696" containerName="oc" Feb 26 09:36:20 crc kubenswrapper[4741]: I0226 09:36:20.980214 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c470433-6812-4042-b1e1-f7b1d3aa4696" containerName="oc" Feb 26 09:36:20 crc kubenswrapper[4741]: I0226 09:36:20.981831 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 09:36:20 crc kubenswrapper[4741]: I0226 09:36:20.987035 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 26 09:36:20 crc kubenswrapper[4741]: I0226 09:36:20.987399 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 26 09:36:20 crc kubenswrapper[4741]: I0226 09:36:20.988702 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-948fk" Feb 26 09:36:20 crc kubenswrapper[4741]: I0226 09:36:20.990735 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.020995 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.105644 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5b00e4ed-4b6a-4871-b454-dec4229deb64-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.105718 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.105802 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5b00e4ed-4b6a-4871-b454-dec4229deb64-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.105989 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5b00e4ed-4b6a-4871-b454-dec4229deb64-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.106164 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.106365 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hlww\" (UniqueName: \"kubernetes.io/projected/5b00e4ed-4b6a-4871-b454-dec4229deb64-kube-api-access-2hlww\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: 
I0226 09:36:21.106525 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.106725 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.106871 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b00e4ed-4b6a-4871-b454-dec4229deb64-config-data\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.210157 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.210280 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hlww\" (UniqueName: \"kubernetes.io/projected/5b00e4ed-4b6a-4871-b454-dec4229deb64-kube-api-access-2hlww\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.210359 4741 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.210464 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.210558 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b00e4ed-4b6a-4871-b454-dec4229deb64-config-data\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.210702 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5b00e4ed-4b6a-4871-b454-dec4229deb64-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.210740 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.210785 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5b00e4ed-4b6a-4871-b454-dec4229deb64-openstack-config\") pod 
\"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.210975 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5b00e4ed-4b6a-4871-b454-dec4229deb64-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.211597 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5b00e4ed-4b6a-4871-b454-dec4229deb64-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.211878 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5b00e4ed-4b6a-4871-b454-dec4229deb64-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.212332 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5b00e4ed-4b6a-4871-b454-dec4229deb64-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.212408 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b00e4ed-4b6a-4871-b454-dec4229deb64-config-data\") pod \"tempest-tests-tempest\" (UID: 
\"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.212600 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.217300 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.217300 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.218442 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.233978 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hlww\" (UniqueName: \"kubernetes.io/projected/5b00e4ed-4b6a-4871-b454-dec4229deb64-kube-api-access-2hlww\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.259886 
4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.319522 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 09:36:21 crc kubenswrapper[4741]: I0226 09:36:21.950539 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 09:36:22 crc kubenswrapper[4741]: I0226 09:36:22.651383 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5b00e4ed-4b6a-4871-b454-dec4229deb64","Type":"ContainerStarted","Data":"cc072564262ce9da373a546de1587207134972131a19af4b269699f21eeeafa7"} Feb 26 09:37:02 crc kubenswrapper[4741]: E0226 09:37:02.427606 4741 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 26 09:37:02 crc kubenswrapper[4741]: E0226 09:37:02.430360 4741 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hlww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(5b00e4ed-4b6a-4871-b454-dec4229deb64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 09:37:02 crc kubenswrapper[4741]: E0226 09:37:02.432042 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="5b00e4ed-4b6a-4871-b454-dec4229deb64" Feb 26 09:37:03 crc kubenswrapper[4741]: E0226 09:37:03.202209 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="5b00e4ed-4b6a-4871-b454-dec4229deb64" Feb 26 09:37:05 crc 
kubenswrapper[4741]: I0226 09:37:05.210027 4741 scope.go:117] "RemoveContainer" containerID="f2f8904bca77b71ad7ceb3e0acfbf77ac936d7d5d8ae19dfaba441fe1d1aab19" Feb 26 09:37:17 crc kubenswrapper[4741]: I0226 09:37:17.327756 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 26 09:37:19 crc kubenswrapper[4741]: I0226 09:37:19.418981 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5b00e4ed-4b6a-4871-b454-dec4229deb64","Type":"ContainerStarted","Data":"130ffb6ef33dfc077043c0c737d6caf31aa7ae15c6596487ce4923fb710eb20a"} Feb 26 09:37:19 crc kubenswrapper[4741]: I0226 09:37:19.448718 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.072517622 podStartE2EDuration="1m0.448696142s" podCreationTimestamp="2026-02-26 09:36:19 +0000 UTC" firstStartedPulling="2026-02-26 09:36:21.948044857 +0000 UTC m=+5016.943982244" lastFinishedPulling="2026-02-26 09:37:17.324223377 +0000 UTC m=+5072.320160764" observedRunningTime="2026-02-26 09:37:19.442594999 +0000 UTC m=+5074.438532386" watchObservedRunningTime="2026-02-26 09:37:19.448696142 +0000 UTC m=+5074.444633529" Feb 26 09:37:25 crc kubenswrapper[4741]: I0226 09:37:25.150002 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:37:25 crc kubenswrapper[4741]: I0226 09:37:25.151027 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 26 09:37:55 crc kubenswrapper[4741]: I0226 09:37:55.149830 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:37:55 crc kubenswrapper[4741]: I0226 09:37:55.150348 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:38:00 crc kubenswrapper[4741]: I0226 09:38:00.379227 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534978-zjn67"] Feb 26 09:38:00 crc kubenswrapper[4741]: I0226 09:38:00.382491 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534978-zjn67" Feb 26 09:38:00 crc kubenswrapper[4741]: I0226 09:38:00.386713 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:38:00 crc kubenswrapper[4741]: I0226 09:38:00.391194 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:38:00 crc kubenswrapper[4741]: I0226 09:38:00.391889 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:38:00 crc kubenswrapper[4741]: I0226 09:38:00.419185 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534978-zjn67"] Feb 26 09:38:00 crc kubenswrapper[4741]: I0226 09:38:00.423731 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qxpk\" (UniqueName: \"kubernetes.io/projected/bcee539a-3cf8-4355-b9b3-85a3ffcefe07-kube-api-access-9qxpk\") pod \"auto-csr-approver-29534978-zjn67\" (UID: \"bcee539a-3cf8-4355-b9b3-85a3ffcefe07\") " pod="openshift-infra/auto-csr-approver-29534978-zjn67" Feb 26 09:38:00 crc kubenswrapper[4741]: I0226 09:38:00.530193 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qxpk\" (UniqueName: \"kubernetes.io/projected/bcee539a-3cf8-4355-b9b3-85a3ffcefe07-kube-api-access-9qxpk\") pod \"auto-csr-approver-29534978-zjn67\" (UID: \"bcee539a-3cf8-4355-b9b3-85a3ffcefe07\") " pod="openshift-infra/auto-csr-approver-29534978-zjn67" Feb 26 09:38:00 crc kubenswrapper[4741]: I0226 09:38:00.599879 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qxpk\" (UniqueName: \"kubernetes.io/projected/bcee539a-3cf8-4355-b9b3-85a3ffcefe07-kube-api-access-9qxpk\") pod \"auto-csr-approver-29534978-zjn67\" (UID: \"bcee539a-3cf8-4355-b9b3-85a3ffcefe07\") " 
pod="openshift-infra/auto-csr-approver-29534978-zjn67" Feb 26 09:38:00 crc kubenswrapper[4741]: I0226 09:38:00.713584 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534978-zjn67" Feb 26 09:38:01 crc kubenswrapper[4741]: I0226 09:38:01.852903 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534978-zjn67"] Feb 26 09:38:01 crc kubenswrapper[4741]: I0226 09:38:01.892775 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 09:38:01 crc kubenswrapper[4741]: I0226 09:38:01.991949 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534978-zjn67" event={"ID":"bcee539a-3cf8-4355-b9b3-85a3ffcefe07","Type":"ContainerStarted","Data":"d44372d435c0cd26f4558f26ea4fbc83d3e48e94d3ecf2a4426f9532bd2797ce"} Feb 26 09:38:04 crc kubenswrapper[4741]: I0226 09:38:04.019449 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534978-zjn67" event={"ID":"bcee539a-3cf8-4355-b9b3-85a3ffcefe07","Type":"ContainerStarted","Data":"eebec0d89c7ef6f95dfac9110ed52b73e24fb6190eb59cc4fd7f080523bef4e8"} Feb 26 09:38:04 crc kubenswrapper[4741]: I0226 09:38:04.049317 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534978-zjn67" podStartSLOduration=3.158473924 podStartE2EDuration="4.04929454s" podCreationTimestamp="2026-02-26 09:38:00 +0000 UTC" firstStartedPulling="2026-02-26 09:38:01.890952093 +0000 UTC m=+5116.886889480" lastFinishedPulling="2026-02-26 09:38:02.781772709 +0000 UTC m=+5117.777710096" observedRunningTime="2026-02-26 09:38:04.038312959 +0000 UTC m=+5119.034250346" watchObservedRunningTime="2026-02-26 09:38:04.04929454 +0000 UTC m=+5119.045231927" Feb 26 09:38:06 crc kubenswrapper[4741]: I0226 09:38:06.046246 4741 generic.go:334] "Generic (PLEG): container finished" 
podID="bcee539a-3cf8-4355-b9b3-85a3ffcefe07" containerID="eebec0d89c7ef6f95dfac9110ed52b73e24fb6190eb59cc4fd7f080523bef4e8" exitCode=0 Feb 26 09:38:06 crc kubenswrapper[4741]: I0226 09:38:06.046364 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534978-zjn67" event={"ID":"bcee539a-3cf8-4355-b9b3-85a3ffcefe07","Type":"ContainerDied","Data":"eebec0d89c7ef6f95dfac9110ed52b73e24fb6190eb59cc4fd7f080523bef4e8"} Feb 26 09:38:07 crc kubenswrapper[4741]: I0226 09:38:07.826952 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534978-zjn67" Feb 26 09:38:07 crc kubenswrapper[4741]: I0226 09:38:07.884967 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qxpk\" (UniqueName: \"kubernetes.io/projected/bcee539a-3cf8-4355-b9b3-85a3ffcefe07-kube-api-access-9qxpk\") pod \"bcee539a-3cf8-4355-b9b3-85a3ffcefe07\" (UID: \"bcee539a-3cf8-4355-b9b3-85a3ffcefe07\") " Feb 26 09:38:07 crc kubenswrapper[4741]: I0226 09:38:07.898614 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcee539a-3cf8-4355-b9b3-85a3ffcefe07-kube-api-access-9qxpk" (OuterVolumeSpecName: "kube-api-access-9qxpk") pod "bcee539a-3cf8-4355-b9b3-85a3ffcefe07" (UID: "bcee539a-3cf8-4355-b9b3-85a3ffcefe07"). InnerVolumeSpecName "kube-api-access-9qxpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:38:07 crc kubenswrapper[4741]: I0226 09:38:07.989572 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qxpk\" (UniqueName: \"kubernetes.io/projected/bcee539a-3cf8-4355-b9b3-85a3ffcefe07-kube-api-access-9qxpk\") on node \"crc\" DevicePath \"\"" Feb 26 09:38:08 crc kubenswrapper[4741]: I0226 09:38:08.089931 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534978-zjn67" event={"ID":"bcee539a-3cf8-4355-b9b3-85a3ffcefe07","Type":"ContainerDied","Data":"d44372d435c0cd26f4558f26ea4fbc83d3e48e94d3ecf2a4426f9532bd2797ce"} Feb 26 09:38:08 crc kubenswrapper[4741]: I0226 09:38:08.090042 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534978-zjn67" Feb 26 09:38:08 crc kubenswrapper[4741]: I0226 09:38:08.090689 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44372d435c0cd26f4558f26ea4fbc83d3e48e94d3ecf2a4426f9532bd2797ce" Feb 26 09:38:08 crc kubenswrapper[4741]: I0226 09:38:08.166694 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534972-cvmcx"] Feb 26 09:38:08 crc kubenswrapper[4741]: I0226 09:38:08.178510 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534972-cvmcx"] Feb 26 09:38:09 crc kubenswrapper[4741]: I0226 09:38:09.804152 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65114173-847d-4dae-ace2-e4129e1c9000" path="/var/lib/kubelet/pods/65114173-847d-4dae-ace2-e4129e1c9000/volumes" Feb 26 09:38:17 crc kubenswrapper[4741]: I0226 09:38:17.803220 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sjbnp"] Feb 26 09:38:17 crc kubenswrapper[4741]: E0226 09:38:17.810962 4741 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bcee539a-3cf8-4355-b9b3-85a3ffcefe07" containerName="oc" Feb 26 09:38:17 crc kubenswrapper[4741]: I0226 09:38:17.811299 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcee539a-3cf8-4355-b9b3-85a3ffcefe07" containerName="oc" Feb 26 09:38:17 crc kubenswrapper[4741]: I0226 09:38:17.812030 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcee539a-3cf8-4355-b9b3-85a3ffcefe07" containerName="oc" Feb 26 09:38:17 crc kubenswrapper[4741]: I0226 09:38:17.822896 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjbnp"] Feb 26 09:38:17 crc kubenswrapper[4741]: I0226 09:38:17.823088 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:17 crc kubenswrapper[4741]: I0226 09:38:17.887418 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa275842-df45-4c10-ac4f-04828ad53ae9-utilities\") pod \"redhat-marketplace-sjbnp\" (UID: \"fa275842-df45-4c10-ac4f-04828ad53ae9\") " pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:17 crc kubenswrapper[4741]: I0226 09:38:17.887608 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa275842-df45-4c10-ac4f-04828ad53ae9-catalog-content\") pod \"redhat-marketplace-sjbnp\" (UID: \"fa275842-df45-4c10-ac4f-04828ad53ae9\") " pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:17 crc kubenswrapper[4741]: I0226 09:38:17.891619 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsff2\" (UniqueName: \"kubernetes.io/projected/fa275842-df45-4c10-ac4f-04828ad53ae9-kube-api-access-jsff2\") pod \"redhat-marketplace-sjbnp\" (UID: \"fa275842-df45-4c10-ac4f-04828ad53ae9\") " 
pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:17 crc kubenswrapper[4741]: I0226 09:38:17.995060 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa275842-df45-4c10-ac4f-04828ad53ae9-utilities\") pod \"redhat-marketplace-sjbnp\" (UID: \"fa275842-df45-4c10-ac4f-04828ad53ae9\") " pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:17 crc kubenswrapper[4741]: I0226 09:38:17.995154 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa275842-df45-4c10-ac4f-04828ad53ae9-catalog-content\") pod \"redhat-marketplace-sjbnp\" (UID: \"fa275842-df45-4c10-ac4f-04828ad53ae9\") " pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:17 crc kubenswrapper[4741]: I0226 09:38:17.995317 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsff2\" (UniqueName: \"kubernetes.io/projected/fa275842-df45-4c10-ac4f-04828ad53ae9-kube-api-access-jsff2\") pod \"redhat-marketplace-sjbnp\" (UID: \"fa275842-df45-4c10-ac4f-04828ad53ae9\") " pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:17 crc kubenswrapper[4741]: I0226 09:38:17.995599 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa275842-df45-4c10-ac4f-04828ad53ae9-utilities\") pod \"redhat-marketplace-sjbnp\" (UID: \"fa275842-df45-4c10-ac4f-04828ad53ae9\") " pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:18 crc kubenswrapper[4741]: I0226 09:38:17.999461 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa275842-df45-4c10-ac4f-04828ad53ae9-catalog-content\") pod \"redhat-marketplace-sjbnp\" (UID: \"fa275842-df45-4c10-ac4f-04828ad53ae9\") " pod="openshift-marketplace/redhat-marketplace-sjbnp" 
Feb 26 09:38:18 crc kubenswrapper[4741]: I0226 09:38:18.404425 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsff2\" (UniqueName: \"kubernetes.io/projected/fa275842-df45-4c10-ac4f-04828ad53ae9-kube-api-access-jsff2\") pod \"redhat-marketplace-sjbnp\" (UID: \"fa275842-df45-4c10-ac4f-04828ad53ae9\") " pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:18 crc kubenswrapper[4741]: I0226 09:38:18.451248 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:19 crc kubenswrapper[4741]: I0226 09:38:19.121079 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjbnp"] Feb 26 09:38:19 crc kubenswrapper[4741]: I0226 09:38:19.242873 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjbnp" event={"ID":"fa275842-df45-4c10-ac4f-04828ad53ae9","Type":"ContainerStarted","Data":"8c083ee6e32717ae559635bff594a3b97f0cdfc8861c408309b668921e186fdc"} Feb 26 09:38:20 crc kubenswrapper[4741]: I0226 09:38:20.257995 4741 generic.go:334] "Generic (PLEG): container finished" podID="fa275842-df45-4c10-ac4f-04828ad53ae9" containerID="e998f1baf528e0218b47bc9ec1347c17112c4ef0bda7f401ff6d357a3e8f2577" exitCode=0 Feb 26 09:38:20 crc kubenswrapper[4741]: I0226 09:38:20.258320 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjbnp" event={"ID":"fa275842-df45-4c10-ac4f-04828ad53ae9","Type":"ContainerDied","Data":"e998f1baf528e0218b47bc9ec1347c17112c4ef0bda7f401ff6d357a3e8f2577"} Feb 26 09:38:21 crc kubenswrapper[4741]: I0226 09:38:21.273153 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjbnp" event={"ID":"fa275842-df45-4c10-ac4f-04828ad53ae9","Type":"ContainerStarted","Data":"b968debb9654778a93edc13f00a9232cc9657a605999d8bf3e75b43ec057f22f"} Feb 26 
09:38:23 crc kubenswrapper[4741]: I0226 09:38:23.296897 4741 generic.go:334] "Generic (PLEG): container finished" podID="fa275842-df45-4c10-ac4f-04828ad53ae9" containerID="b968debb9654778a93edc13f00a9232cc9657a605999d8bf3e75b43ec057f22f" exitCode=0 Feb 26 09:38:23 crc kubenswrapper[4741]: I0226 09:38:23.296975 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjbnp" event={"ID":"fa275842-df45-4c10-ac4f-04828ad53ae9","Type":"ContainerDied","Data":"b968debb9654778a93edc13f00a9232cc9657a605999d8bf3e75b43ec057f22f"} Feb 26 09:38:24 crc kubenswrapper[4741]: I0226 09:38:24.313438 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjbnp" event={"ID":"fa275842-df45-4c10-ac4f-04828ad53ae9","Type":"ContainerStarted","Data":"90e486a62eb997016db1f716a32f11d4c02bf1cc227b3cbb16c518ac2e5800c4"} Feb 26 09:38:24 crc kubenswrapper[4741]: I0226 09:38:24.361995 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sjbnp" podStartSLOduration=3.8987080389999997 podStartE2EDuration="7.36195931s" podCreationTimestamp="2026-02-26 09:38:17 +0000 UTC" firstStartedPulling="2026-02-26 09:38:20.261422503 +0000 UTC m=+5135.257359890" lastFinishedPulling="2026-02-26 09:38:23.724673774 +0000 UTC m=+5138.720611161" observedRunningTime="2026-02-26 09:38:24.333486391 +0000 UTC m=+5139.329423798" watchObservedRunningTime="2026-02-26 09:38:24.36195931 +0000 UTC m=+5139.357896697" Feb 26 09:38:25 crc kubenswrapper[4741]: I0226 09:38:25.149139 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:38:25 crc kubenswrapper[4741]: I0226 09:38:25.149404 4741 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:38:25 crc kubenswrapper[4741]: I0226 09:38:25.149456 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 09:38:25 crc kubenswrapper[4741]: I0226 09:38:25.151158 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 09:38:25 crc kubenswrapper[4741]: I0226 09:38:25.151586 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" gracePeriod=600 Feb 26 09:38:25 crc kubenswrapper[4741]: E0226 09:38:25.296650 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:38:25 crc kubenswrapper[4741]: I0226 09:38:25.327192 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" 
containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" exitCode=0 Feb 26 09:38:25 crc kubenswrapper[4741]: I0226 09:38:25.327243 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4"} Feb 26 09:38:25 crc kubenswrapper[4741]: I0226 09:38:25.327290 4741 scope.go:117] "RemoveContainer" containerID="64c069b554b565268c509c561ac65dd91439859fc90f66377d7b7cbce0b3b518" Feb 26 09:38:25 crc kubenswrapper[4741]: I0226 09:38:25.328347 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:38:25 crc kubenswrapper[4741]: E0226 09:38:25.328729 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:38:28 crc kubenswrapper[4741]: I0226 09:38:28.452315 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:28 crc kubenswrapper[4741]: I0226 09:38:28.452879 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:29 crc kubenswrapper[4741]: I0226 09:38:29.524686 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-sjbnp" podUID="fa275842-df45-4c10-ac4f-04828ad53ae9" containerName="registry-server" probeResult="failure" output=< Feb 26 09:38:29 crc kubenswrapper[4741]: timeout: failed to 
connect service ":50051" within 1s Feb 26 09:38:29 crc kubenswrapper[4741]: > Feb 26 09:38:38 crc kubenswrapper[4741]: I0226 09:38:38.574397 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:38 crc kubenswrapper[4741]: I0226 09:38:38.634736 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:38 crc kubenswrapper[4741]: I0226 09:38:38.741822 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjbnp"] Feb 26 09:38:38 crc kubenswrapper[4741]: I0226 09:38:38.791431 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:38:38 crc kubenswrapper[4741]: E0226 09:38:38.794501 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:38:40 crc kubenswrapper[4741]: I0226 09:38:40.547449 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sjbnp" podUID="fa275842-df45-4c10-ac4f-04828ad53ae9" containerName="registry-server" containerID="cri-o://90e486a62eb997016db1f716a32f11d4c02bf1cc227b3cbb16c518ac2e5800c4" gracePeriod=2 Feb 26 09:38:41 crc kubenswrapper[4741]: I0226 09:38:41.562273 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjbnp" event={"ID":"fa275842-df45-4c10-ac4f-04828ad53ae9","Type":"ContainerDied","Data":"90e486a62eb997016db1f716a32f11d4c02bf1cc227b3cbb16c518ac2e5800c4"} Feb 26 
09:38:41 crc kubenswrapper[4741]: I0226 09:38:41.562185 4741 generic.go:334] "Generic (PLEG): container finished" podID="fa275842-df45-4c10-ac4f-04828ad53ae9" containerID="90e486a62eb997016db1f716a32f11d4c02bf1cc227b3cbb16c518ac2e5800c4" exitCode=0 Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.186374 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.343788 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsff2\" (UniqueName: \"kubernetes.io/projected/fa275842-df45-4c10-ac4f-04828ad53ae9-kube-api-access-jsff2\") pod \"fa275842-df45-4c10-ac4f-04828ad53ae9\" (UID: \"fa275842-df45-4c10-ac4f-04828ad53ae9\") " Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.344088 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa275842-df45-4c10-ac4f-04828ad53ae9-catalog-content\") pod \"fa275842-df45-4c10-ac4f-04828ad53ae9\" (UID: \"fa275842-df45-4c10-ac4f-04828ad53ae9\") " Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.344173 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa275842-df45-4c10-ac4f-04828ad53ae9-utilities\") pod \"fa275842-df45-4c10-ac4f-04828ad53ae9\" (UID: \"fa275842-df45-4c10-ac4f-04828ad53ae9\") " Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.350016 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa275842-df45-4c10-ac4f-04828ad53ae9-utilities" (OuterVolumeSpecName: "utilities") pod "fa275842-df45-4c10-ac4f-04828ad53ae9" (UID: "fa275842-df45-4c10-ac4f-04828ad53ae9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.386596 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa275842-df45-4c10-ac4f-04828ad53ae9-kube-api-access-jsff2" (OuterVolumeSpecName: "kube-api-access-jsff2") pod "fa275842-df45-4c10-ac4f-04828ad53ae9" (UID: "fa275842-df45-4c10-ac4f-04828ad53ae9"). InnerVolumeSpecName "kube-api-access-jsff2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.416513 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa275842-df45-4c10-ac4f-04828ad53ae9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa275842-df45-4c10-ac4f-04828ad53ae9" (UID: "fa275842-df45-4c10-ac4f-04828ad53ae9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.449050 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsff2\" (UniqueName: \"kubernetes.io/projected/fa275842-df45-4c10-ac4f-04828ad53ae9-kube-api-access-jsff2\") on node \"crc\" DevicePath \"\"" Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.449131 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa275842-df45-4c10-ac4f-04828ad53ae9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.449149 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa275842-df45-4c10-ac4f-04828ad53ae9-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.580185 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjbnp" 
event={"ID":"fa275842-df45-4c10-ac4f-04828ad53ae9","Type":"ContainerDied","Data":"8c083ee6e32717ae559635bff594a3b97f0cdfc8861c408309b668921e186fdc"} Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.580267 4741 scope.go:117] "RemoveContainer" containerID="90e486a62eb997016db1f716a32f11d4c02bf1cc227b3cbb16c518ac2e5800c4" Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.580274 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjbnp" Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.635697 4741 scope.go:117] "RemoveContainer" containerID="b968debb9654778a93edc13f00a9232cc9657a605999d8bf3e75b43ec057f22f" Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.682753 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjbnp"] Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.693104 4741 scope.go:117] "RemoveContainer" containerID="e998f1baf528e0218b47bc9ec1347c17112c4ef0bda7f401ff6d357a3e8f2577" Feb 26 09:38:42 crc kubenswrapper[4741]: I0226 09:38:42.699804 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjbnp"] Feb 26 09:38:43 crc kubenswrapper[4741]: I0226 09:38:43.811805 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa275842-df45-4c10-ac4f-04828ad53ae9" path="/var/lib/kubelet/pods/fa275842-df45-4c10-ac4f-04828ad53ae9/volumes" Feb 26 09:38:51 crc kubenswrapper[4741]: I0226 09:38:51.787516 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:38:51 crc kubenswrapper[4741]: E0226 09:38:51.788397 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:39:05 crc kubenswrapper[4741]: I0226 09:39:05.570317 4741 scope.go:117] "RemoveContainer" containerID="89699ba4ff35ac9a669ca4fa5a22bd72412ee1f47018ff53d5f63149fc6091e3" Feb 26 09:39:05 crc kubenswrapper[4741]: I0226 09:39:05.812515 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:39:05 crc kubenswrapper[4741]: E0226 09:39:05.812918 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:39:16 crc kubenswrapper[4741]: I0226 09:39:16.787725 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:39:16 crc kubenswrapper[4741]: E0226 09:39:16.788634 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:39:28 crc kubenswrapper[4741]: I0226 09:39:28.794072 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:39:28 crc kubenswrapper[4741]: E0226 09:39:28.798303 4741 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:39:40 crc kubenswrapper[4741]: I0226 09:39:40.799324 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:39:40 crc kubenswrapper[4741]: E0226 09:39:40.806568 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:39:54 crc kubenswrapper[4741]: I0226 09:39:54.799224 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:39:54 crc kubenswrapper[4741]: E0226 09:39:54.810599 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:40:01 crc kubenswrapper[4741]: I0226 09:40:01.571359 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534980-tw7d7"] Feb 26 09:40:01 crc kubenswrapper[4741]: E0226 
09:40:01.594496 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa275842-df45-4c10-ac4f-04828ad53ae9" containerName="registry-server"
Feb 26 09:40:01 crc kubenswrapper[4741]: I0226 09:40:01.595202 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa275842-df45-4c10-ac4f-04828ad53ae9" containerName="registry-server"
Feb 26 09:40:01 crc kubenswrapper[4741]: E0226 09:40:01.598854 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa275842-df45-4c10-ac4f-04828ad53ae9" containerName="extract-content"
Feb 26 09:40:01 crc kubenswrapper[4741]: I0226 09:40:01.598869 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa275842-df45-4c10-ac4f-04828ad53ae9" containerName="extract-content"
Feb 26 09:40:01 crc kubenswrapper[4741]: E0226 09:40:01.598884 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa275842-df45-4c10-ac4f-04828ad53ae9" containerName="extract-utilities"
Feb 26 09:40:01 crc kubenswrapper[4741]: I0226 09:40:01.598895 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa275842-df45-4c10-ac4f-04828ad53ae9" containerName="extract-utilities"
Feb 26 09:40:01 crc kubenswrapper[4741]: I0226 09:40:01.601860 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa275842-df45-4c10-ac4f-04828ad53ae9" containerName="registry-server"
Feb 26 09:40:01 crc kubenswrapper[4741]: I0226 09:40:01.646605 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534980-tw7d7"
Feb 26 09:40:01 crc kubenswrapper[4741]: I0226 09:40:01.754472 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 09:40:01 crc kubenswrapper[4741]: I0226 09:40:01.754604 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 09:40:01 crc kubenswrapper[4741]: I0226 09:40:01.765154 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lh9k\" (UniqueName: \"kubernetes.io/projected/23965e77-0950-401f-9784-6a86250078e3-kube-api-access-2lh9k\") pod \"auto-csr-approver-29534980-tw7d7\" (UID: \"23965e77-0950-401f-9784-6a86250078e3\") " pod="openshift-infra/auto-csr-approver-29534980-tw7d7"
Feb 26 09:40:01 crc kubenswrapper[4741]: I0226 09:40:01.842618 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6"
Feb 26 09:40:01 crc kubenswrapper[4741]: I0226 09:40:01.869163 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lh9k\" (UniqueName: \"kubernetes.io/projected/23965e77-0950-401f-9784-6a86250078e3-kube-api-access-2lh9k\") pod \"auto-csr-approver-29534980-tw7d7\" (UID: \"23965e77-0950-401f-9784-6a86250078e3\") " pod="openshift-infra/auto-csr-approver-29534980-tw7d7"
Feb 26 09:40:02 crc kubenswrapper[4741]: I0226 09:40:02.082368 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-f4pcr" podUID="a2705136-6518-4339-b135-2d6f71d0fe6b" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:02 crc kubenswrapper[4741]: I0226 09:40:02.243575 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:02 crc kubenswrapper[4741]: I0226 09:40:02.243668 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:02 crc kubenswrapper[4741]: I0226 09:40:02.245425 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:02 crc kubenswrapper[4741]: I0226 09:40:02.245508 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:02 crc kubenswrapper[4741]: I0226 09:40:02.875419 4741 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-hn57j container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:02 crc kubenswrapper[4741]: I0226 09:40:02.875715 4741 patch_prober.go:28] interesting pod/console-operator-58897d9998-8jb7x container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:02 crc kubenswrapper[4741]: I0226 09:40:02.875767 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" podUID="4a4923cd-a652-4027-9945-5b20f94b0fff" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:02 crc kubenswrapper[4741]: I0226 09:40:02.875781 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8jb7x" podUID="c7a6edd5-0d0d-431a-9884-af988d7db265" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:02 crc kubenswrapper[4741]: I0226 09:40:02.875842 4741 patch_prober.go:28] interesting pod/console-operator-58897d9998-8jb7x container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:02 crc kubenswrapper[4741]: I0226 09:40:02.875868 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-8jb7x" podUID="c7a6edd5-0d0d-431a-9884-af988d7db265" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:02 crc kubenswrapper[4741]: I0226 09:40:02.921746 4741 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:02 crc kubenswrapper[4741]: I0226 09:40:02.921929 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.344362 4741 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-62dzt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.344390 4741 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-62dzt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.344445 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" podUID="e7c8235f-88c7-4d87-b1b5-9514cb07f9cf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.344480 4741 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-66p6t container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.344493 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" podUID="e7c8235f-88c7-4d87-b1b5-9514cb07f9cf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.344523 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" podUID="ccea6218-4c8e-45dd-890f-5f9fd1806c99" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.344573 4741 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-66p6t container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.344600 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" podUID="ccea6218-4c8e-45dd-890f-5f9fd1806c99" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.346352 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.346385 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.346377 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.346427 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.855283 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-k969n" podUID="b979c6a5-dfb5-43c3-8787-0d4e96bebd64" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:03 crc kubenswrapper[4741]: I0226 09:40:03.855284 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-k969n" podUID="b979c6a5-dfb5-43c3-8787-0d4e96bebd64" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:04 crc kubenswrapper[4741]: I0226 09:40:04.861521 4741 patch_prober.go:28] interesting pod/console-879b4584-zh2v7 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.145:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:04 crc kubenswrapper[4741]: I0226 09:40:04.868627 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-879b4584-zh2v7" podUID="6958fe99-167d-43b2-a0e2-e141e980f982" containerName="console" probeResult="failure" output="Get \"https://10.217.0.145:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:04 crc kubenswrapper[4741]: I0226 09:40:04.947571 4741 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:04 crc kubenswrapper[4741]: I0226 09:40:04.947647 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:04 crc kubenswrapper[4741]: I0226 09:40:04.977402 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" podUID="f4754cdd-d402-4c7e-a0cf-a39549369eb8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.019368 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" podUID="b2c3a19d-a170-476f-a589-e7cde492ac1d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.159970 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.160001 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.160050 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.160065 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.228552 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" podUID="aafef34e-4723-41d4-a28e-634f4ba80bea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.342460 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" podUID="7d9bffe2-0600-47fe-83e6-847d6943a748" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.342651 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-mt4jv" podUID="e2b04bf6-e1f3-48a6-9277-1a220a59ef82" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.479349 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" podUID="e3fc347b-349b-4811-8f1e-0281658e669a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.520433 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" podUID="ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.769459 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" podUID="0d69cf5a-6ccc-4c66-a767-fd837ea440a3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.810289 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" podUID="6980cc82-375e-4057-8dd6-1518d19891ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.888411 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr" podUID="10293970-cf7e-4d61-9522-0bbfaa7a872f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.964881 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="24120f9b-9d9b-4783-9dd9-2450215d3d26" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.164:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:05 crc kubenswrapper[4741]: I0226 09:40:05.964915 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="24120f9b-9d9b-4783-9dd9-2450215d3d26" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.164:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:06 crc kubenswrapper[4741]: I0226 09:40:06.077397 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" podUID="c9c57ac4-4382-4a2a-b0c7-8985f71ea615" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:06 crc kubenswrapper[4741]: I0226 09:40:06.199364 4741 patch_prober.go:28] interesting pod/oauth-openshift-6fd87b5cc7-nr8cs container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:06 crc kubenswrapper[4741]: I0226 09:40:06.199437 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" podUID="42aebfcc-7921-46c7-a085-4bb8c46042f7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:06 crc kubenswrapper[4741]: I0226 09:40:06.199468 4741 patch_prober.go:28] interesting pod/oauth-openshift-6fd87b5cc7-nr8cs container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:06 crc kubenswrapper[4741]: I0226 09:40:06.199510 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" podUID="42aebfcc-7921-46c7-a085-4bb8c46042f7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:06 crc kubenswrapper[4741]: I0226 09:40:06.230866 4741 patch_prober.go:28] interesting pod/route-controller-manager-5db6ccf457-gnnh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:06 crc kubenswrapper[4741]: I0226 09:40:06.230951 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" podUID="a45d3c75-9707-4363-8095-15c7702c3083" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:06 crc kubenswrapper[4741]: I0226 09:40:06.230972 4741 patch_prober.go:28] interesting pod/route-controller-manager-5db6ccf457-gnnh8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:06 crc kubenswrapper[4741]: I0226 09:40:06.231078 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" podUID="a45d3c75-9707-4363-8095-15c7702c3083" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:06 crc kubenswrapper[4741]: I0226 09:40:06.421263 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" podUID="001f4723-6a83-41ae-ac81-fc17c370a90e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:06 crc kubenswrapper[4741]: I0226 09:40:06.421365 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" podUID="dbdb4143-6ca6-4468-ae59-db0a15ae9229" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:06 crc kubenswrapper[4741]: I0226 09:40:06.463852 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" podUID="e569c05c-2b4a-448e-8393-65650cdc0d4a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:06 crc kubenswrapper[4741]: I0226 09:40:06.753349 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" podUID="3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.297371 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-jqlhm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.297489 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-qjtwt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.297538 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" podUID="aad6cae3-3b9d-4d9e-8549-55da6e10901d" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.297475 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" podUID="b029b8c8-35eb-4509-a29a-9ada4434b899" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.382416 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" podUID="8520f5ec-d0e0-4bc0-a10b-dfb5157c5924" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.382442 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-jqlhm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.382570 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" podUID="b029b8c8-35eb-4509-a29a-9ada4434b899" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.382621 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-qjtwt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.382641 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" podUID="aad6cae3-3b9d-4d9e-8549-55da6e10901d" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.382753 4741 patch_prober.go:28] interesting pod/thanos-querier-7bcdd678f4-p8jlf container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.382777 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" podUID="8878d1eb-ece5-4e57-aa4b-9997e84f5968" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.382814 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2lglc" podUID="8520f5ec-d0e0-4bc0-a10b-dfb5157c5924" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.464404 4741 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9vkjt container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.464492 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" podUID="480c4db0-8b7a-4ef8-a2e6-c7289a9f21af" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.546337 4741 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-7c8vj container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.22:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.546379 4741 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9vkjt container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.546419 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" podUID="90d97168-5e93-4e51-b66e-d35fc864211d" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.546469 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" podUID="480c4db0-8b7a-4ef8-a2e6-c7289a9f21af" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.546539 4741 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-7c8vj container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.546640 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" podUID="90d97168-5e93-4e51-b66e-d35fc864211d" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.594541 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lh9k\" (UniqueName: \"kubernetes.io/projected/23965e77-0950-401f-9784-6a86250078e3-kube-api-access-2lh9k\") pod \"auto-csr-approver-29534980-tw7d7\" (UID: \"23965e77-0950-401f-9784-6a86250078e3\") " pod="openshift-infra/auto-csr-approver-29534980-tw7d7"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.767135 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534980-tw7d7"
Feb 26 09:40:07 crc kubenswrapper[4741]: I0226 09:40:07.790056 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4"
Feb 26 09:40:07 crc kubenswrapper[4741]: E0226 09:40:07.793598 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5"
Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.382531 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.382601 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.382627 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.382638 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.382693 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77"
Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.383422 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77"
Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.390869 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"e9e2234cad6411c7b9f2f9889538ac6627a912452bccb81c2dcecb2b1c86ea22"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.393610 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" containerID="cri-o://e9e2234cad6411c7b9f2f9889538ac6627a912452bccb81c2dcecb2b1c86ea22" gracePeriod=30
Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.659295 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" podUID="e374c69c-1959-44c3-839c-2b5897259440" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.128:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.659295 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" podUID="e374c69c-1959-44c3-839c-2b5897259440" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.814382 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-k969n" podUID="b979c6a5-dfb5-43c3-8787-0d4e96bebd64" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:08 crc
kubenswrapper[4741]: I0226 09:40:08.935877 4741 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tdzlz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.935980 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" podUID="dc5a16f1-f482-4a9f-81f0-b21fa200d4da" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.936133 4741 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tdzlz container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:08 crc kubenswrapper[4741]: I0226 09:40:08.936209 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" podUID="dc5a16f1-f482-4a9f-81f0-b21fa200d4da" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:09 crc kubenswrapper[4741]: I0226 09:40:09.392365 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:09 crc kubenswrapper[4741]: I0226 
09:40:09.392443 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:09 crc kubenswrapper[4741]: I0226 09:40:09.707902 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="816448f3-dfc3-4045-834a-c82c2a4e0589" containerName="prometheus" probeResult="failure" output="command timed out" Feb 26 09:40:09 crc kubenswrapper[4741]: I0226 09:40:09.708532 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="ed8ae863-261b-4cbd-945a-b79c99fa0a9f" containerName="galera" probeResult="failure" output="command timed out" Feb 26 09:40:09 crc kubenswrapper[4741]: I0226 09:40:09.711927 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="ed8ae863-261b-4cbd-945a-b79c99fa0a9f" containerName="galera" probeResult="failure" output="command timed out" Feb 26 09:40:09 crc kubenswrapper[4741]: I0226 09:40:09.713044 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="816448f3-dfc3-4045-834a-c82c2a4e0589" containerName="prometheus" probeResult="failure" output="command timed out" Feb 26 09:40:10 crc kubenswrapper[4741]: I0226 09:40:10.558655 4741 trace.go:236] Trace[238346577]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (26-Feb-2026 09:40:08.157) (total time: 2392ms): Feb 26 09:40:10 crc kubenswrapper[4741]: Trace[238346577]: [2.392973421s] [2.392973421s] END Feb 26 09:40:10 crc kubenswrapper[4741]: I0226 09:40:10.558670 4741 trace.go:236] Trace[155379772]: "Calculate volume metrics of registry-storage for pod 
openshift-image-registry/image-registry-66df7c8f76-58cgc" (26-Feb-2026 09:40:04.247) (total time: 6303ms): Feb 26 09:40:10 crc kubenswrapper[4741]: Trace[155379772]: [6.303836402s] [6.303836402s] END Feb 26 09:40:10 crc kubenswrapper[4741]: I0226 09:40:10.690913 4741 patch_prober.go:28] interesting pod/metrics-server-565b5fc49-5lpkb container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:10 crc kubenswrapper[4741]: I0226 09:40:10.691013 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-565b5fc49-5lpkb" podUID="e03473fd-4571-48a7-8eb0-93beb64488e7" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:10 crc kubenswrapper[4741]: I0226 09:40:10.702660 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="2b1496b8-9f14-472d-af02-7357f75ba7cf" containerName="galera" probeResult="failure" output="command timed out" Feb 26 09:40:10 crc kubenswrapper[4741]: I0226 09:40:10.702962 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="2b1496b8-9f14-472d-af02-7357f75ba7cf" containerName="galera" probeResult="failure" output="command timed out" Feb 26 09:40:10 crc kubenswrapper[4741]: I0226 09:40:10.706484 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ad35d04e-1800-463f-8059-29fac13e2947" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 26 09:40:10 crc kubenswrapper[4741]: I0226 09:40:10.875370 4741 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" podUID="bb123f4a-b95e-413e-8d1b-a5efc5cbacdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:10 crc kubenswrapper[4741]: I0226 09:40:10.965070 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="24120f9b-9d9b-4783-9dd9-2450215d3d26" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.164:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:10 crc kubenswrapper[4741]: I0226 09:40:10.965246 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="24120f9b-9d9b-4783-9dd9-2450215d3d26" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.164:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.068970 4741 patch_prober.go:28] interesting pod/monitoring-plugin-77bff4d7bd-72zxp container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.069289 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp" podUID="c16d8877-c85a-45f3-b358-cacde9af090f" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:11 crc 
kubenswrapper[4741]: I0226 09:40:11.160941 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.161032 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.342386 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" podUID="85855f0c-ab53-44f0-8f0d-ac0299c5fc24" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.342792 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-76f9f89dd9-x2csm" podUID="85855f0c-ab53-44f0-8f0d-ac0299c5fc24" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.432689 4741 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-tvh2j container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.447328 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvh2j" podUID="8c718e96-1e2d-41e8-beff-d68534e49add" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.552048 4741 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-8lpzw container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.93:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.552138 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-8lpzw" podUID="dfce00da-1ed0-4246-af75-e66c5aa1bd39" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.93:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.656022 4741 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-fvl8r container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.656089 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" podUID="091900f2-d6cc-4fbb-8b1b-f4216f868a9c" containerName="loki-distributor" probeResult="failure" output="Get 
\"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.703896 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-lc9nj" podUID="c06d2b98-49e9-4e2b-9b13-498c00d387a8" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.860784 4741 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-kvhl6 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.860877 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" podUID="41e6c349-1fc0-4972-a080-55bb785a4bf7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.912346 4741 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-wjt7d container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:11 crc kubenswrapper[4741]: I0226 09:40:11.912437 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" podUID="4896602d-060e-4777-957f-ff83ce8e812f" containerName="loki-query-frontend" probeResult="failure" output="Get 
\"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.072454 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" podUID="80b43fed-c72c-4b2b-8d4d-0a0b9044d61f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.072524 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" podUID="80b43fed-c72c-4b2b-8d4d-0a0b9044d61f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.214344 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-jqlhm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.214780 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" podUID="b029b8c8-35eb-4509-a29a-9ada4434b899" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.214573 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-jqlhm container/opa namespace/openshift-logging: Readiness probe status=failure 
output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.214892 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" podUID="b029b8c8-35eb-4509-a29a-9ada4434b899" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.274459 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-qjtwt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.274518 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-qjtwt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.274533 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" podUID="aad6cae3-3b9d-4d9e-8549-55da6e10901d" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.274559 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" podUID="aad6cae3-3b9d-4d9e-8549-55da6e10901d" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.277292 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-f4pcr" podUID="a2705136-6518-4339-b135-2d6f71d0fe6b" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.277319 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" podUID="6fe5145b-bbf9-47ac-b53e-1282479db87d" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.100:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.335582 4741 patch_prober.go:28] interesting pod/thanos-querier-7bcdd678f4-p8jlf container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.335707 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" podUID="8878d1eb-ece5-4e57-aa4b-9997e84f5968" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.360322 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-f4pcr" podUID="a2705136-6518-4339-b135-2d6f71d0fe6b" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 
crc kubenswrapper[4741]: I0226 09:40:12.442361 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-f4pcr" podUID="a2705136-6518-4339-b135-2d6f71d0fe6b" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.442445 4741 patch_prober.go:28] interesting pod/downloads-7954f5f757-ptx5j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.442490 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcrfb" podUID="6fe5145b-bbf9-47ac-b53e-1282479db87d" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.100:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.442506 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ptx5j" podUID="59420a86-a033-4cbe-98bf-3ec780191ed6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.442917 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-86ddb6bd46-8nx8x" podUID="468a5a70-08db-488d-9f31-f9835091c5ee" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.101:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.443237 4741 prober.go:107] "Probe failed" 
probeType="Readiness" pod="metallb-system/controller-86ddb6bd46-8nx8x" podUID="468a5a70-08db-488d-9f31-f9835091c5ee" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.101:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.443296 4741 patch_prober.go:28] interesting pod/downloads-7954f5f757-ptx5j container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.443311 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ptx5j" podUID="59420a86-a033-4cbe-98bf-3ec780191ed6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.526561 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-76fc895699-z8llq" podUID="4b189628-5343-4512-bf5d-1daf4abf4079" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.526675 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-76fc895699-z8llq" podUID="4b189628-5343-4512-bf5d-1daf4abf4079" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.654841 4741 patch_prober.go:28] interesting 
pod/logging-loki-distributor-5d5548c9f5-fvl8r container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.654920 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" podUID="091900f2-d6cc-4fbb-8b1b-f4216f868a9c" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.835402 4741 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-hn57j container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.835973 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-hn57j" podUID="4a4923cd-a652-4027-9945-5b20f94b0fff" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.852465 4741 patch_prober.go:28] interesting pod/console-operator-58897d9998-8jb7x container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting 
for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.852583 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-8jb7x" podUID="c7a6edd5-0d0d-431a-9884-af988d7db265" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.852696 4741 patch_prober.go:28] interesting pod/console-operator-58897d9998-8jb7x container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.852721 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8jb7x" podUID="c7a6edd5-0d0d-431a-9884-af988d7db265" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.859177 4741 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.859618 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5e9394a8-a585-40dc-8178-539b51408421" containerName="loki-ingester" probeResult="failure" output="Get 
\"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.861476 4741 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-kvhl6 container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": context deadline exceeded" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.861512 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" podUID="41e6c349-1fc0-4972-a080-55bb785a4bf7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": context deadline exceeded" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.912677 4741 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-wjt7d container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.912783 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" podUID="4896602d-060e-4777-957f-ff83ce8e812f" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.920453 4741 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.920543 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.949429 4741 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.949499 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="88cf7afe-52dd-437f-9739-e4f112fef5e8" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.993771 4741 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:12 crc kubenswrapper[4741]: I0226 09:40:12.994186 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="be2b1208-3e86-448e-beeb-86c6d953097d" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: 
I0226 09:40:13.160609 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.160692 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.214153 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-jqlhm container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.214212 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-jqlhm container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:8083/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.214233 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" podUID="b029b8c8-35eb-4509-a29a-9ada4434b899" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.214268 4741 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" podUID="b029b8c8-35eb-4509-a29a-9ada4434b899" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.274457 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-qjtwt container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.57:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.274539 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" podUID="aad6cae3-3b9d-4d9e-8549-55da6e10901d" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.274926 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-qjtwt container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.57:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.275092 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" podUID="aad6cae3-3b9d-4d9e-8549-55da6e10901d" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.344490 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Readiness probe status=failure output="Get 
\"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.344906 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.344542 4741 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-62dzt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.344650 4741 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-66p6t container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.345044 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" podUID="e7c8235f-88c7-4d87-b1b5-9514cb07f9cf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.345073 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" 
podUID="ccea6218-4c8e-45dd-890f-5f9fd1806c99" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.344692 4741 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-62dzt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.345145 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-62dzt" podUID="e7c8235f-88c7-4d87-b1b5-9514cb07f9cf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.344594 4741 patch_prober.go:28] interesting pod/router-default-5444994796-nrk4h container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.345192 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-nrk4h" podUID="bf09eafa-6397-4fa3-b7f4-c56e66348f9a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.344673 4741 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-66p6t 
container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.345253 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-66p6t" podUID="ccea6218-4c8e-45dd-890f-5f9fd1806c99" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.433349 4741 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-llgbc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.433422 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" podUID="00bfb7f6-7024-4431-9fc6-f86f8ff5e363" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.433499 4741 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-llgbc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 
09:40:13.433514 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-llgbc" podUID="00bfb7f6-7024-4431-9fc6-f86f8ff5e363" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.697413 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-tdw95" podUID="4bf58d3b-55b2-408e-ab70-84e96ef92a64" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.697605 4741 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-btgjx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.44:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.697611 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-tdw95" podUID="4bf58d3b-55b2-408e-ab70-84e96ef92a64" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.697684 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" podUID="ca42622a-7a05-4d6d-a432-389fa771e319" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.44:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 
crc kubenswrapper[4741]: I0226 09:40:13.697750 4741 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-btgjx container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.44:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.697784 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-btgjx" podUID="ca42622a-7a05-4d6d-a432-389fa771e319" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.44:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.703871 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="816448f3-dfc3-4045-834a-c82c2a4e0589" containerName="prometheus" probeResult="failure" output="command timed out" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.704303 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="816448f3-dfc3-4045-834a-c82c2a4e0589" containerName="prometheus" probeResult="failure" output="command timed out" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.843344 4741 trace.go:236] Trace[905055645]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-compactor-0" (26-Feb-2026 09:40:11.257) (total time: 2585ms): Feb 26 09:40:13 crc kubenswrapper[4741]: Trace[905055645]: [2.585502072s] [2.585502072s] END Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.858512 4741 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Liveness probe status=failure output="Get 
\"https://10.217.0.58:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.858866 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-ingester-0" podUID="5e9394a8-a585-40dc-8178-539b51408421" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.58:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.948000 4741 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.60:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.948057 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-compactor-0" podUID="88cf7afe-52dd-437f-9739-e4f112fef5e8" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.60:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.993836 4741 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.61:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:13 crc kubenswrapper[4741]: I0226 09:40:13.993914 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="be2b1208-3e86-448e-beeb-86c6d953097d" containerName="loki-index-gateway" 
probeResult="failure" output="Get \"https://10.217.0.61:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:14 crc kubenswrapper[4741]: I0226 09:40:14.135546 4741 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:14 crc kubenswrapper[4741]: I0226 09:40:14.135625 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:14 crc kubenswrapper[4741]: I0226 09:40:14.251359 4741 status_manager.go:875] "Failed to update status for pod" pod="openshift-infra/auto-csr-approver-29534980-tw7d7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23965e77-0950-401f-9784-6a86250078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T09:40:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T09:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T09:40:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oc]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T09:40:01Z\\\",\\\"message\\\":\\\"containers with unready status: [oc]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-cli:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"oc\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T09:40:01Z\\\"}}\" for pod \"openshift-infra\"/\"auto-csr-approver-29534980-tw7d7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=7s\": context deadline exceeded" Feb 26 09:40:14 crc kubenswrapper[4741]: I0226 09:40:14.391868 4741 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-58cgc container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.69:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:14 crc kubenswrapper[4741]: I0226 09:40:14.391885 4741 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-58cgc container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.69:5000/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:14 crc kubenswrapper[4741]: I0226 09:40:14.391944 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" podUID="aa5bbcf2-6f44-42fe-b99b-50d222ce35ba" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.69:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:14 crc kubenswrapper[4741]: I0226 09:40:14.391990 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-58cgc" podUID="aa5bbcf2-6f44-42fe-b99b-50d222ce35ba" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.69:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:14 crc kubenswrapper[4741]: I0226 09:40:14.706463 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-thrfc" podUID="2fcbc58b-4880-4c34-8d80-16c8be56db58" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 09:40:14 crc kubenswrapper[4741]: I0226 09:40:14.707796 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-thrfc" podUID="2fcbc58b-4880-4c34-8d80-16c8be56db58" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 09:40:14 crc kubenswrapper[4741]: I0226 09:40:14.857986 4741 patch_prober.go:28] interesting pod/console-879b4584-zh2v7 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.145:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:14 crc kubenswrapper[4741]: I0226 09:40:14.858068 4741 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-console/console-879b4584-zh2v7" podUID="6958fe99-167d-43b2-a0e2-e141e980f982" containerName="console" probeResult="failure" output="Get \"https://10.217.0.145:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:14 crc kubenswrapper[4741]: I0226 09:40:14.998051 4741 generic.go:334] "Generic (PLEG): container finished" podID="b7349090-2a42-41d0-9bed-5624de634744" containerID="3ef2ddfcd2186585f7aab15753653eccaae62a29cd4a5cc6e2297d6d310d1110" exitCode=1 Feb 26 09:40:14 crc kubenswrapper[4741]: I0226 09:40:14.999867 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs" event={"ID":"b7349090-2a42-41d0-9bed-5624de634744","Type":"ContainerDied","Data":"3ef2ddfcd2186585f7aab15753653eccaae62a29cd4a5cc6e2297d6d310d1110"} Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.010878 4741 scope.go:117] "RemoveContainer" containerID="3ef2ddfcd2186585f7aab15753653eccaae62a29cd4a5cc6e2297d6d310d1110" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.018282 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" podUID="f4754cdd-d402-4c7e-a0cf-a39549369eb8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.018288 4741 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.018338 
4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rt588" podUID="f4754cdd-d402-4c7e-a0cf-a39549369eb8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.018399 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.101323 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" podUID="b2c3a19d-a170-476f-a589-e7cde492ac1d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.101807 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-s78b5" podUID="b2c3a19d-a170-476f-a589-e7cde492ac1d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.271412 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" podUID="aafef34e-4723-41d4-a28e-634f4ba80bea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.271516 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mkmsh" podUID="aafef34e-4723-41d4-a28e-634f4ba80bea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.467430 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" podUID="6e5158cf-c5d8-46e4-b433-20c6a410bf5e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.467708 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.7:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.467494 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" podUID="7d9bffe2-0600-47fe-83e6-847d6943a748" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.467786 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.7:8080/livez\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.467693 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wdfht" podUID="6e5158cf-c5d8-46e4-b433-20c6a410bf5e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.467665 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-6bfw4" podUID="7d9bffe2-0600-47fe-83e6-847d6943a748" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.549334 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" podUID="e3fc347b-349b-4811-8f1e-0281658e669a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.632333 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" podUID="ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.632492 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9b4f4" podUID="e3fc347b-349b-4811-8f1e-0281658e669a" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.632568 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-b4tjj" podUID="ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.710770 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-2vcnd" podUID="93b4b5c9-a048-4219-86a9-ef1ff11cc024" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.710858 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-mscg4" podUID="cfb44914-3bd9-4c8c-937b-cccc55045fc6" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.710891 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-mscg4" podUID="cfb44914-3bd9-4c8c-937b-cccc55045fc6" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.710695 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ad35d04e-1800-463f-8059-29fac13e2947" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.710914 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-2vcnd" podUID="93b4b5c9-a048-4219-86a9-ef1ff11cc024" 
containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.710914 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" podUID="234558da-0d49-4f9f-aec9-7d5a8e63cfef" containerName="sbdb" probeResult="failure" output="command timed out" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.710987 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-t7mqq" podUID="234558da-0d49-4f9f-aec9-7d5a8e63cfef" containerName="nbdb" probeResult="failure" output="command timed out" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.832774 4741 trace.go:236] Trace[1622418970]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (26-Feb-2026 09:40:14.639) (total time: 1193ms): Feb 26 09:40:15 crc kubenswrapper[4741]: Trace[1622418970]: [1.193393879s] [1.193393879s] END Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.874347 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" podUID="76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.964760 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="24120f9b-9d9b-4783-9dd9-2450215d3d26" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.164:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.964760 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="24120f9b-9d9b-4783-9dd9-2450215d3d26" 
containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.164:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:15 crc kubenswrapper[4741]: I0226 09:40:15.965369 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.040306 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" podUID="6980cc82-375e-4057-8dd6-1518d19891ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.219366 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" podUID="76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.219796 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" podUID="3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.220302 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" podUID="c40047b0-d115-4a5f-aa50-d888eafff094" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.220477 4741 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-ll5hz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.220726 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz" podUID="dca8318b-c85c-42b6-a540-fc16d675a3f4" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.220833 4741 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-ll5hz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.220903 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ll5hz" podUID="dca8318b-c85c-42b6-a540-fc16d675a3f4" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.221408 4741 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k2c7v" podUID="c40047b0-d115-4a5f-aa50-d888eafff094" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.303289 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr" podUID="10293970-cf7e-4d61-9522-0bbfaa7a872f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.303671 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" podUID="0d69cf5a-6ccc-4c66-a767-fd837ea440a3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.303776 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" podUID="e97b1690-b880-4c0d-9e36-484d2abf0e8e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.303768 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-d7flk" podUID="0d69cf5a-6ccc-4c66-a767-fd837ea440a3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 
09:40:16.386278 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" podUID="6980cc82-375e-4057-8dd6-1518d19891ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.386677 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" podUID="c9c57ac4-4382-4a2a-b0c7-8985f71ea615" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.386822 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d672t" podUID="e97b1690-b880-4c0d-9e36-484d2abf0e8e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.387797 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-tc4z9" podUID="3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.387913 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 26 
09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.388080 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.387912 4741 patch_prober.go:28] interesting pod/oauth-openshift-6fd87b5cc7-nr8cs container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.388309 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" podUID="42aebfcc-7921-46c7-a085-4bb8c46042f7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.387941 4741 patch_prober.go:28] interesting pod/oauth-openshift-6fd87b5cc7-nr8cs container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.388356 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6fd87b5cc7-nr8cs" podUID="42aebfcc-7921-46c7-a085-4bb8c46042f7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.388630 4741 patch_prober.go:28] interesting pod/controller-manager-66875b6555-b4bl8 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.388693 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" podUID="65df98f3-85ae-481f-9dd5-8c0ff79bb7b8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.388720 4741 patch_prober.go:28] interesting pod/controller-manager-66875b6555-b4bl8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.388784 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-66875b6555-b4bl8" podUID="65df98f3-85ae-481f-9dd5-8c0ff79bb7b8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.388857 4741 patch_prober.go:28] interesting pod/route-controller-manager-5db6ccf457-gnnh8 container/route-controller-manager 
namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.388895 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" podUID="a45d3c75-9707-4363-8095-15c7702c3083" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.388929 4741 patch_prober.go:28] interesting pod/route-controller-manager-5db6ccf457-gnnh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.388943 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5db6ccf457-gnnh8" podUID="a45d3c75-9707-4363-8095-15c7702c3083" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.388983 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zhvxr" podUID="10293970-cf7e-4d61-9522-0bbfaa7a872f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.472402 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" podUID="c9c57ac4-4382-4a2a-b0c7-8985f71ea615" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.472515 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" podUID="dbdb4143-6ca6-4468-ae59-db0a15ae9229" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.554298 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" podUID="001f4723-6a83-41ae-ac81-fc17c370a90e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.636500 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" podUID="e569c05c-2b4a-448e-8393-65650cdc0d4a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.636630 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7c7nz" podUID="dbdb4143-6ca6-4468-ae59-db0a15ae9229" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.124:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.636659 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-5854c6b474-xr2dz" podUID="001f4723-6a83-41ae-ac81-fc17c370a90e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.636817 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" podUID="e569c05c-2b4a-448e-8393-65650cdc0d4a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.704232 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-lc9nj" podUID="c06d2b98-49e9-4e2b-9b13-498c00d387a8" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.794457 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" podUID="3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:16 crc kubenswrapper[4741]: I0226 09:40:16.794925 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" podUID="3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/readyz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.213260 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-jqlhm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.213410 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-jqlhm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.213726 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" podUID="b029b8c8-35eb-4509-a29a-9ada4434b899" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.213791 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" podUID="b029b8c8-35eb-4509-a29a-9ada4434b899" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.273757 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-qjtwt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 
26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.273851 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" podUID="aad6cae3-3b9d-4d9e-8549-55da6e10901d" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.273859 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-qjtwt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.273962 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" podUID="aad6cae3-3b9d-4d9e-8549-55da6e10901d" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.335565 4741 patch_prober.go:28] interesting pod/thanos-querier-7bcdd678f4-p8jlf container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.335642 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-7bcdd678f4-p8jlf" podUID="8878d1eb-ece5-4e57-aa4b-9997e84f5968" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 
09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.440361 4741 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9vkjt container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.440387 4741 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9vkjt container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.440444 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" podUID="480c4db0-8b7a-4ef8-a2e6-c7289a9f21af" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.440477 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-9vkjt" podUID="480c4db0-8b7a-4ef8-a2e6-c7289a9f21af" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.481597 4741 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-7c8vj container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.22:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:17 crc kubenswrapper[4741]: I0226 09:40:17.481709 4741 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-7c8vj" podUID="90d97168-5e93-4e51-b66e-d35fc864211d" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:18 crc kubenswrapper[4741]: I0226 09:40:18.045683 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs" event={"ID":"b7349090-2a42-41d0-9bed-5624de634744","Type":"ContainerStarted","Data":"91ee608da50e84863c3d528ed83b42ddd2ea2d40c0ed10688059d6b319a2982a"} Feb 26 09:40:18 crc kubenswrapper[4741]: I0226 09:40:18.046301 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs" Feb 26 09:40:18 crc kubenswrapper[4741]: I0226 09:40:18.441224 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 26 09:40:18 crc kubenswrapper[4741]: I0226 09:40:18.615402 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-79d8d89fdf-5jkv5" podUID="e374c69c-1959-44c3-839c-2b5897259440" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.128:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:18 crc kubenswrapper[4741]: I0226 09:40:18.708083 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-fzxmn" podUID="4ddcb17f-6b4a-4194-aab9-e24dc49c75e0" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 09:40:18 crc kubenswrapper[4741]: I0226 09:40:18.708147 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-28q5n" podUID="c71842fc-fda8-481f-96d6-64b811178a92" 
containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 09:40:18 crc kubenswrapper[4741]: I0226 09:40:18.708083 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-fzxmn" podUID="4ddcb17f-6b4a-4194-aab9-e24dc49c75e0" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 09:40:18 crc kubenswrapper[4741]: I0226 09:40:18.708784 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-28q5n" podUID="c71842fc-fda8-481f-96d6-64b811178a92" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 09:40:18 crc kubenswrapper[4741]: I0226 09:40:18.916304 4741 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tdzlz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:18 crc kubenswrapper[4741]: I0226 09:40:18.916693 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" podUID="dc5a16f1-f482-4a9f-81f0-b21fa200d4da" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:40:18 crc kubenswrapper[4741]: I0226 09:40:18.916362 4741 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tdzlz container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:40:18 crc kubenswrapper[4741]: I0226 09:40:18.916801 4741 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-marketplace/marketplace-operator-79b997595-tdzlz" podUID="dc5a16f1-f482-4a9f-81f0-b21fa200d4da" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:19 crc kubenswrapper[4741]: I0226 09:40:19.160694 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Feb 26 09:40:19 crc kubenswrapper[4741]: I0226 09:40:19.160752 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 26 09:40:19 crc kubenswrapper[4741]: I0226 09:40:19.703578 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="ed8ae863-261b-4cbd-945a-b79c99fa0a9f" containerName="galera" probeResult="failure" output="command timed out"
Feb 26 09:40:19 crc kubenswrapper[4741]: I0226 09:40:19.705244 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="ed8ae863-261b-4cbd-945a-b79c99fa0a9f" containerName="galera" probeResult="failure" output="command timed out"
Feb 26 09:40:19 crc kubenswrapper[4741]: I0226 09:40:19.706078 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="816448f3-dfc3-4045-834a-c82c2a4e0589" containerName="prometheus" probeResult="failure" output="command timed out"
Feb 26 09:40:19 crc kubenswrapper[4741]: I0226 09:40:19.706179 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="816448f3-dfc3-4045-834a-c82c2a4e0589" containerName="prometheus" probeResult="failure" output="command timed out"
Feb 26 09:40:19 crc kubenswrapper[4741]: I0226 09:40:19.706322 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.073700 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" event={"ID":"6980cc82-375e-4057-8dd6-1518d19891ed","Type":"ContainerDied","Data":"65118c356c3984d61c9c8be66ea9c9016b660c8a5a6d13c30e75a8496698d159"}
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.074615 4741 generic.go:334] "Generic (PLEG): container finished" podID="6980cc82-375e-4057-8dd6-1518d19891ed" containerID="65118c356c3984d61c9c8be66ea9c9016b660c8a5a6d13c30e75a8496698d159" exitCode=1
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.075816 4741 scope.go:117] "RemoveContainer" containerID="65118c356c3984d61c9c8be66ea9c9016b660c8a5a6d13c30e75a8496698d159"
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.080263 4741 generic.go:334] "Generic (PLEG): container finished" podID="80b43fed-c72c-4b2b-8d4d-0a0b9044d61f" containerID="0b9b18c2d08770989e810e4ef07d1cf318f0aac4a9b954e4ecf5ae9506da6f29" exitCode=1
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.080343 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" event={"ID":"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f","Type":"ContainerDied","Data":"0b9b18c2d08770989e810e4ef07d1cf318f0aac4a9b954e4ecf5ae9506da6f29"}
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.081692 4741 scope.go:117] "RemoveContainer" containerID="0b9b18c2d08770989e810e4ef07d1cf318f0aac4a9b954e4ecf5ae9506da6f29"
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.083881 4741 generic.go:334] "Generic (PLEG): container finished" podID="76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe" containerID="1fff9b9a5530bf760707fdd0067cfdfeb6b72d9e4004a8208f1a299c1a2eb105" exitCode=1
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.083944 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" event={"ID":"76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe","Type":"ContainerDied","Data":"1fff9b9a5530bf760707fdd0067cfdfeb6b72d9e4004a8208f1a299c1a2eb105"}
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.087735 4741 scope.go:117] "RemoveContainer" containerID="1fff9b9a5530bf760707fdd0067cfdfeb6b72d9e4004a8208f1a299c1a2eb105"
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.703603 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="2b1496b8-9f14-472d-af02-7357f75ba7cf" containerName="galera" probeResult="failure" output="command timed out"
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.703622 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="2b1496b8-9f14-472d-af02-7357f75ba7cf" containerName="galera" probeResult="failure" output="command timed out"
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.865830 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534980-tw7d7"]
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.877398 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" podUID="bb123f4a-b95e-413e-8d1b-a5efc5cbacdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:20 crc kubenswrapper[4741]: I0226 09:40:20.990055 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z"
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.068413 4741 patch_prober.go:28] interesting pod/monitoring-plugin-77bff4d7bd-72zxp container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.068477 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-77bff4d7bd-72zxp" podUID="c16d8877-c85a-45f3-b358-cacde9af090f" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.100686 4741 generic.go:334] "Generic (PLEG): container finished" podID="e569c05c-2b4a-448e-8393-65650cdc0d4a" containerID="90ab64f55ed6f4600a1386ffab9a2cf28cda36fd6ff64c125017056eb021afd6" exitCode=1
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.100783 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" event={"ID":"e569c05c-2b4a-448e-8393-65650cdc0d4a","Type":"ContainerDied","Data":"90ab64f55ed6f4600a1386ffab9a2cf28cda36fd6ff64c125017056eb021afd6"}
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.102371 4741 scope.go:117] "RemoveContainer" containerID="90ab64f55ed6f4600a1386ffab9a2cf28cda36fd6ff64c125017056eb021afd6"
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.105199 4741 generic.go:334] "Generic (PLEG): container finished" podID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerID="e9e2234cad6411c7b9f2f9889538ac6627a912452bccb81c2dcecb2b1c86ea22" exitCode=0
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.105238 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" event={"ID":"1ceb1ab9-9ce4-4a40-9273-727f0499aa21","Type":"ContainerDied","Data":"e9e2234cad6411c7b9f2f9889538ac6627a912452bccb81c2dcecb2b1c86ea22"}
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.431853 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.659292 4741 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-fvl8r container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.659605 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-fvl8r" podUID="091900f2-d6cc-4fbb-8b1b-f4216f868a9c" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.710926 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ad35d04e-1800-463f-8059-29fac13e2947" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.711026 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0"
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.720793 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"76b854d55d6853c7e3a5cd4f44c010876da4ba6844183ab7bb0cba44784eec43"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted"
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.735226 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad35d04e-1800-463f-8059-29fac13e2947" containerName="ceilometer-central-agent" containerID="cri-o://76b854d55d6853c7e3a5cd4f44c010876da4ba6844183ab7bb0cba44784eec43" gracePeriod=30
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.861898 4741 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-kvhl6 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.861953 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-kvhl6" podUID="41e6c349-1fc0-4972-a080-55bb785a4bf7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.925751 4741 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-wjt7d container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:21 crc kubenswrapper[4741]: I0226 09:40:21.926088 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wjt7d" podUID="4896602d-060e-4777-957f-ff83ce8e812f" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.083445 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-f4pcr" podUID="a2705136-6518-4339-b135-2d6f71d0fe6b" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.083530 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-f4pcr"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.085549 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"b63f7ba609d315bacc63e0980c17a018b3233a7e243ce73658ffbae10a9f3415"} pod="metallb-system/frr-k8s-f4pcr" containerMessage="Container frr failed liveness probe, will be restarted"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.085674 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-f4pcr" podUID="a2705136-6518-4339-b135-2d6f71d0fe6b" containerName="frr" containerID="cri-o://b63f7ba609d315bacc63e0980c17a018b3233a7e243ce73658ffbae10a9f3415" gracePeriod=2
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.166066 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.166157 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.172343 4741 generic.go:334] "Generic (PLEG): container finished" podID="6c09faf7-6a12-4474-8251-2aa222e9c596" containerID="97bb0d2d64b32fa754aad0396198d300b5bc0a355d96fc8b760e837491b42296" exitCode=1
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.172468 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx" event={"ID":"6c09faf7-6a12-4474-8251-2aa222e9c596","Type":"ContainerDied","Data":"97bb0d2d64b32fa754aad0396198d300b5bc0a355d96fc8b760e837491b42296"}
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.174691 4741 scope.go:117] "RemoveContainer" containerID="97bb0d2d64b32fa754aad0396198d300b5bc0a355d96fc8b760e837491b42296"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.193180 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z" event={"ID":"80b43fed-c72c-4b2b-8d4d-0a0b9044d61f","Type":"ContainerStarted","Data":"01ac6f4fa55df59414c2d4d158bdecc3a5bc9e9d59600bdc90e2808edb084a5b"}
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.195500 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.205584 4741 generic.go:334] "Generic (PLEG): container finished" podID="bb123f4a-b95e-413e-8d1b-a5efc5cbacdd" containerID="8901fcd05aa6e6c22d4274e4ab37a8affbe4ae042c6602049f7ecf414d5cb2c1" exitCode=1
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.205694 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" event={"ID":"bb123f4a-b95e-413e-8d1b-a5efc5cbacdd","Type":"ContainerDied","Data":"8901fcd05aa6e6c22d4274e4ab37a8affbe4ae042c6602049f7ecf414d5cb2c1"}
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.207306 4741 scope.go:117] "RemoveContainer" containerID="8901fcd05aa6e6c22d4274e4ab37a8affbe4ae042c6602049f7ecf414d5cb2c1"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.214341 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-jqlhm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.214478 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-jqlhm" podUID="b029b8c8-35eb-4509-a29a-9ada4434b899" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.270585 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s" event={"ID":"76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe","Type":"ContainerStarted","Data":"233029e8bed243c9f018e9a47f37ab76df51236614f783befecfc8db526b490b"}
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.274078 4741 patch_prober.go:28] interesting pod/logging-loki-gateway-7bbb966984-qjtwt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.274137 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7bbb966984-qjtwt" podUID="aad6cae3-3b9d-4d9e-8549-55da6e10901d" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.275564 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.302024 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r" event={"ID":"6980cc82-375e-4057-8dd6-1518d19891ed","Type":"ContainerStarted","Data":"589e7ba0779e31e13f8cdcfb9f64086f9dc2c4133e6219d88a2d3c767fdaa0b4"}
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.302683 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.313190 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778" event={"ID":"e569c05c-2b4a-448e-8393-65650cdc0d4a","Type":"ContainerStarted","Data":"bf160e02e56b864de319d1543661c08c4ea1c56ad64f2c0246ff72c8ab26948e"}
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.314450 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.536256 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-mscg4" podUID="cfb44914-3bd9-4c8c-937b-cccc55045fc6" containerName="registry-server" probeResult="failure" output=<
Feb 26 09:40:22 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s
Feb 26 09:40:22 crc kubenswrapper[4741]: >
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.536747 4741 patch_prober.go:28] interesting pod/loki-operator-controller-manager-6c89769cfb-mbqvs container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.49:8081/readyz\": dial tcp 10.217.0.49:8081: connect: connection refused" start-of-body=
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.536780 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs" podUID="b7349090-2a42-41d0-9bed-5624de634744" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.49:8081/readyz\": dial tcp 10.217.0.49:8081: connect: connection refused"
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.537297 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-mscg4" podUID="cfb44914-3bd9-4c8c-937b-cccc55045fc6" containerName="registry-server" probeResult="failure" output=<
Feb 26 09:40:22 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s
Feb 26 09:40:22 crc kubenswrapper[4741]: >
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.539751 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-2vcnd" podUID="93b4b5c9-a048-4219-86a9-ef1ff11cc024" containerName="registry-server" probeResult="failure" output=<
Feb 26 09:40:22 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s
Feb 26 09:40:22 crc kubenswrapper[4741]: >
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.539835 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-2vcnd" podUID="93b4b5c9-a048-4219-86a9-ef1ff11cc024" containerName="registry-server" probeResult="failure" output=<
Feb 26 09:40:22 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s
Feb 26 09:40:22 crc kubenswrapper[4741]: >
Feb 26 09:40:22 crc kubenswrapper[4741]: I0226 09:40:22.787703 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4"
Feb 26 09:40:22 crc kubenswrapper[4741]: E0226 09:40:22.789459 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5"
Feb 26 09:40:23 crc kubenswrapper[4741]: I0226 09:40:23.354399 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rlrx" event={"ID":"6c09faf7-6a12-4474-8251-2aa222e9c596","Type":"ContainerStarted","Data":"f4db428093cb8aec64a10f2ca0943186d96a979bffe6cb748f42111d37ad3012"}
Feb 26 09:40:23 crc kubenswrapper[4741]: I0226 09:40:23.366248 4741 generic.go:334] "Generic (PLEG): container finished" podID="a2705136-6518-4339-b135-2d6f71d0fe6b" containerID="b63f7ba609d315bacc63e0980c17a018b3233a7e243ce73658ffbae10a9f3415" exitCode=143
Feb 26 09:40:23 crc kubenswrapper[4741]: I0226 09:40:23.366354 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4pcr" event={"ID":"a2705136-6518-4339-b135-2d6f71d0fe6b","Type":"ContainerDied","Data":"b63f7ba609d315bacc63e0980c17a018b3233a7e243ce73658ffbae10a9f3415"}
Feb 26 09:40:23 crc kubenswrapper[4741]: I0226 09:40:23.366453 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4pcr" event={"ID":"a2705136-6518-4339-b135-2d6f71d0fe6b","Type":"ContainerStarted","Data":"99172a4847d1273aef8b8a49aea722bff7215ed89d4473863b268f84fa2a1e91"}
Feb 26 09:40:23 crc kubenswrapper[4741]: I0226 09:40:23.369618 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq" event={"ID":"bb123f4a-b95e-413e-8d1b-a5efc5cbacdd","Type":"ContainerStarted","Data":"e9b9c1ce8ea7662972b8fcea8b41d4a0f340afab0ce7c66c7d59d9bc8204cadd"}
Feb 26 09:40:23 crc kubenswrapper[4741]: I0226 09:40:23.369919 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq"
Feb 26 09:40:23 crc kubenswrapper[4741]: I0226 09:40:23.371595 4741 generic.go:334] "Generic (PLEG): container finished" podID="3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed" containerID="15eb2fb815bf5e120908cc628eaef45b9011ef5da10144034d4b2aa922072da5" exitCode=1
Feb 26 09:40:23 crc kubenswrapper[4741]: I0226 09:40:23.371642 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" event={"ID":"3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed","Type":"ContainerDied","Data":"15eb2fb815bf5e120908cc628eaef45b9011ef5da10144034d4b2aa922072da5"}
Feb 26 09:40:23 crc kubenswrapper[4741]: I0226 09:40:23.373482 4741 scope.go:117] "RemoveContainer" containerID="15eb2fb815bf5e120908cc628eaef45b9011ef5da10144034d4b2aa922072da5"
Feb 26 09:40:23 crc kubenswrapper[4741]: I0226 09:40:23.376409 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" event={"ID":"1ceb1ab9-9ce4-4a40-9273-727f0499aa21","Type":"ContainerStarted","Data":"96636c73dddc7d2bd82ec2f046c15d76961275609fe79a2040ed7d1545cfc048"}
Feb 26 09:40:23 crc kubenswrapper[4741]: I0226 09:40:23.377246 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Feb 26 09:40:23 crc kubenswrapper[4741]: I0226 09:40:23.377297 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 26 09:40:23 crc kubenswrapper[4741]: I0226 09:40:23.954183 4741 trace.go:236] Trace[1792392739]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (26-Feb-2026 09:40:22.728) (total time: 1221ms):
Feb 26 09:40:23 crc kubenswrapper[4741]: Trace[1792392739]: [1.221112655s] [1.221112655s] END
Feb 26 09:40:24 crc kubenswrapper[4741]: I0226 09:40:24.391538 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" event={"ID":"3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed","Type":"ContainerStarted","Data":"aa35809c0dedea039056a5714566165c4cc7d0d9aedcdaa441700905cd8c99b8"}
Feb 26 09:40:24 crc kubenswrapper[4741]: I0226 09:40:24.392202 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77"
Feb 26 09:40:24 crc kubenswrapper[4741]: I0226 09:40:24.393098 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh"
Feb 26 09:40:25 crc kubenswrapper[4741]: I0226 09:40:25.160047 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Feb 26 09:40:25 crc kubenswrapper[4741]: I0226 09:40:25.160048 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Feb 26 09:40:25 crc kubenswrapper[4741]: I0226 09:40:25.160125 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 26 09:40:25 crc kubenswrapper[4741]: I0226 09:40:25.160181 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 26 09:40:25 crc kubenswrapper[4741]: I0226 09:40:25.403333 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh"
Feb 26 09:40:25 crc kubenswrapper[4741]: I0226 09:40:25.403757 4741 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9zw77 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Feb 26 09:40:25 crc kubenswrapper[4741]: I0226 09:40:25.403864 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77" podUID="1ceb1ab9-9ce4-4a40-9273-727f0499aa21" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 26 09:40:26 crc kubenswrapper[4741]: I0226 09:40:26.034779 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-f4pcr"
Feb 26 09:40:26 crc kubenswrapper[4741]: I0226 09:40:26.742590 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-f4pcr"
Feb 26 09:40:27 crc kubenswrapper[4741]: I0226 09:40:27.238327 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534980-tw7d7"]
Feb 26 09:40:27 crc kubenswrapper[4741]: W0226 09:40:27.400562 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23965e77_0950_401f_9784_6a86250078e3.slice/crio-4c01e0efbb9349faffc51a3269b10da1d6d5296c0c320a7236b39efcca5cff5a WatchSource:0}: Error finding container 4c01e0efbb9349faffc51a3269b10da1d6d5296c0c320a7236b39efcca5cff5a: Status 404 returned error can't find the container with id 4c01e0efbb9349faffc51a3269b10da1d6d5296c0c320a7236b39efcca5cff5a
Feb 26 09:40:27 crc kubenswrapper[4741]: I0226 09:40:27.440091 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534980-tw7d7" event={"ID":"23965e77-0950-401f-9784-6a86250078e3","Type":"ContainerStarted","Data":"4c01e0efbb9349faffc51a3269b10da1d6d5296c0c320a7236b39efcca5cff5a"}
Feb 26 09:40:28 crc kubenswrapper[4741]: I0226 09:40:28.218293 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9zw77"
Feb 26 09:40:30 crc kubenswrapper[4741]: I0226 09:40:30.533660 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534980-tw7d7" event={"ID":"23965e77-0950-401f-9784-6a86250078e3","Type":"ContainerStarted","Data":"782afc8309269b54f954c8d937a774031a1f5cb42071e7b9b5bb2a4abb8d6ecc"}
Feb 26 09:40:30 crc kubenswrapper[4741]: I0226 09:40:30.582253 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534980-tw7d7" podStartSLOduration=29.491820871 podStartE2EDuration="30.573826805s" podCreationTimestamp="2026-02-26 09:40:00 +0000 UTC" firstStartedPulling="2026-02-26 09:40:27.40828202 +0000 UTC m=+5262.404219397" lastFinishedPulling="2026-02-26 09:40:28.490287944 +0000 UTC m=+5263.486225331" observedRunningTime="2026-02-26 09:40:30.569959235 +0000 UTC m=+5265.565896622" watchObservedRunningTime="2026-02-26 09:40:30.573826805 +0000 UTC m=+5265.569764182"
Feb 26 09:40:31 crc kubenswrapper[4741]: I0226 09:40:31.016379 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z"
Feb 26 09:40:32 crc kubenswrapper[4741]: I0226 09:40:32.541958 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6c89769cfb-mbqvs"
Feb 26 09:40:32 crc kubenswrapper[4741]: I0226 09:40:32.558153 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534980-tw7d7" event={"ID":"23965e77-0950-401f-9784-6a86250078e3","Type":"ContainerDied","Data":"782afc8309269b54f954c8d937a774031a1f5cb42071e7b9b5bb2a4abb8d6ecc"}
Feb 26 09:40:32 crc kubenswrapper[4741]: I0226 09:40:32.559185 4741 generic.go:334] "Generic (PLEG): container finished" podID="23965e77-0950-401f-9784-6a86250078e3" containerID="782afc8309269b54f954c8d937a774031a1f5cb42071e7b9b5bb2a4abb8d6ecc" exitCode=0
Feb 26 09:40:34 crc kubenswrapper[4741]: I0226 09:40:34.759423 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5tj5s"
Feb 26 09:40:34 crc kubenswrapper[4741]: I0226 09:40:34.835284 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z8h9r"
Feb 26 09:40:35 crc kubenswrapper[4741]: I0226 09:40:35.411638 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zf778"
Feb 26 09:40:35 crc kubenswrapper[4741]: I0226 09:40:35.523250 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534980-tw7d7"
Feb 26 09:40:35 crc kubenswrapper[4741]: I0226 09:40:35.572218 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lh9k\" (UniqueName: \"kubernetes.io/projected/23965e77-0950-401f-9784-6a86250078e3-kube-api-access-2lh9k\") pod \"23965e77-0950-401f-9784-6a86250078e3\" (UID: \"23965e77-0950-401f-9784-6a86250078e3\") "
Feb 26 09:40:35 crc kubenswrapper[4741]: I0226 09:40:35.661365 4741 generic.go:334] "Generic (PLEG): container finished" podID="ad35d04e-1800-463f-8059-29fac13e2947" containerID="76b854d55d6853c7e3a5cd4f44c010876da4ba6844183ab7bb0cba44784eec43" exitCode=0
Feb 26 09:40:35 crc kubenswrapper[4741]: I0226 09:40:35.661425 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad35d04e-1800-463f-8059-29fac13e2947","Type":"ContainerDied","Data":"76b854d55d6853c7e3a5cd4f44c010876da4ba6844183ab7bb0cba44784eec43"}
Feb 26 09:40:35 crc kubenswrapper[4741]: I0226 09:40:35.666102 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534980-tw7d7" event={"ID":"23965e77-0950-401f-9784-6a86250078e3","Type":"ContainerDied","Data":"4c01e0efbb9349faffc51a3269b10da1d6d5296c0c320a7236b39efcca5cff5a"}
Feb 26 09:40:35 crc kubenswrapper[4741]: I0226 09:40:35.666389 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534980-tw7d7"
Feb 26 09:40:35 crc kubenswrapper[4741]: I0226 09:40:35.668241 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c01e0efbb9349faffc51a3269b10da1d6d5296c0c320a7236b39efcca5cff5a"
Feb 26 09:40:35 crc kubenswrapper[4741]: I0226 09:40:35.688546 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23965e77-0950-401f-9784-6a86250078e3-kube-api-access-2lh9k" (OuterVolumeSpecName: "kube-api-access-2lh9k") pod "23965e77-0950-401f-9784-6a86250078e3" (UID: "23965e77-0950-401f-9784-6a86250078e3"). InnerVolumeSpecName "kube-api-access-2lh9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:40:35 crc kubenswrapper[4741]: I0226 09:40:35.748781 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh"
Feb 26 09:40:35 crc kubenswrapper[4741]: I0226 09:40:35.784230 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lh9k\" (UniqueName: \"kubernetes.io/projected/23965e77-0950-401f-9784-6a86250078e3-kube-api-access-2lh9k\") on node \"crc\" DevicePath \"\""
Feb 26 09:40:35 crc kubenswrapper[4741]: I0226 09:40:35.839303 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4"
Feb 26 09:40:35 crc kubenswrapper[4741]: E0226 09:40:35.840383 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5"
Feb 26 09:40:36 crc kubenswrapper[4741]: I0226 09:40:36.630505 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534974-hwz24"]
Feb 26 09:40:36 crc kubenswrapper[4741]: I0226 09:40:36.641874 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534974-hwz24"]
Feb 26 09:40:36 crc kubenswrapper[4741]: I0226 09:40:36.683219 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad35d04e-1800-463f-8059-29fac13e2947","Type":"ContainerStarted","Data":"bf9642d819a99fc473063632beb52a549470cd15e3cf65aa8c9f07e7a7b67fbc"}
Feb 26 09:40:37 crc kubenswrapper[4741]: I0226 09:40:37.804165 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7c6c2c7-8db9-429c-994b-74d9ded9fcd8" path="/var/lib/kubelet/pods/d7c6c2c7-8db9-429c-994b-74d9ded9fcd8/volumes"
Feb 26 09:40:49 crc kubenswrapper[4741]: I0226 09:40:49.794684 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4"
Feb 26 09:40:49 crc kubenswrapper[4741]: E0226 09:40:49.804023 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5"
Feb 26 09:40:59 crc kubenswrapper[4741]: I0226 09:40:59.924247 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-64545648d6-pt5sq"
Feb 26 09:41:04 crc kubenswrapper[4741]: I0226 09:41:04.789949 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4"
Feb 26 09:41:04 crc kubenswrapper[4741]: E0226 09:41:04.793877 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5"
Feb 26 09:41:06 crc kubenswrapper[4741]: I0226 09:41:06.170576 4741 scope.go:117] "RemoveContainer" containerID="de46e9ca1aae1d279cb4267e58405dce9481649bd64e5e3c8d8ce6706958d300"
Feb 26 09:41:17 crc kubenswrapper[4741]: I0226 09:41:17.788355 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4"
Feb 26 09:41:17 crc kubenswrapper[4741]: E0226 09:41:17.789456 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5"
Feb 26 09:41:32 crc kubenswrapper[4741]: I0226 09:41:32.788667 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4"
Feb 26 09:41:32 crc kubenswrapper[4741]: E0226 09:41:32.789627 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\""
pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:41:44 crc kubenswrapper[4741]: I0226 09:41:44.787889 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:41:44 crc kubenswrapper[4741]: E0226 09:41:44.788904 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:41:55 crc kubenswrapper[4741]: I0226 09:41:55.800755 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:41:55 crc kubenswrapper[4741]: E0226 09:41:55.802355 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:42:00 crc kubenswrapper[4741]: I0226 09:42:00.553867 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534982-zxrwr"] Feb 26 09:42:00 crc kubenswrapper[4741]: E0226 09:42:00.562131 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23965e77-0950-401f-9784-6a86250078e3" containerName="oc" Feb 26 09:42:00 crc kubenswrapper[4741]: I0226 09:42:00.562174 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="23965e77-0950-401f-9784-6a86250078e3" containerName="oc" 
Feb 26 09:42:00 crc kubenswrapper[4741]: I0226 09:42:00.564382 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="23965e77-0950-401f-9784-6a86250078e3" containerName="oc" Feb 26 09:42:00 crc kubenswrapper[4741]: I0226 09:42:00.574236 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534982-zxrwr" Feb 26 09:42:00 crc kubenswrapper[4741]: I0226 09:42:00.589511 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:42:00 crc kubenswrapper[4741]: I0226 09:42:00.589523 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:42:00 crc kubenswrapper[4741]: I0226 09:42:00.589608 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:42:00 crc kubenswrapper[4741]: I0226 09:42:00.657389 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534982-zxrwr"] Feb 26 09:42:00 crc kubenswrapper[4741]: I0226 09:42:00.670293 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9frq\" (UniqueName: \"kubernetes.io/projected/12005334-faff-4e67-b8ad-b2ae50d7e79f-kube-api-access-m9frq\") pod \"auto-csr-approver-29534982-zxrwr\" (UID: \"12005334-faff-4e67-b8ad-b2ae50d7e79f\") " pod="openshift-infra/auto-csr-approver-29534982-zxrwr" Feb 26 09:42:00 crc kubenswrapper[4741]: I0226 09:42:00.772978 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9frq\" (UniqueName: \"kubernetes.io/projected/12005334-faff-4e67-b8ad-b2ae50d7e79f-kube-api-access-m9frq\") pod \"auto-csr-approver-29534982-zxrwr\" (UID: \"12005334-faff-4e67-b8ad-b2ae50d7e79f\") " pod="openshift-infra/auto-csr-approver-29534982-zxrwr" Feb 26 09:42:00 crc kubenswrapper[4741]: I0226 
09:42:00.821086 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9frq\" (UniqueName: \"kubernetes.io/projected/12005334-faff-4e67-b8ad-b2ae50d7e79f-kube-api-access-m9frq\") pod \"auto-csr-approver-29534982-zxrwr\" (UID: \"12005334-faff-4e67-b8ad-b2ae50d7e79f\") " pod="openshift-infra/auto-csr-approver-29534982-zxrwr" Feb 26 09:42:00 crc kubenswrapper[4741]: I0226 09:42:00.917406 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534982-zxrwr" Feb 26 09:42:02 crc kubenswrapper[4741]: I0226 09:42:02.506102 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534982-zxrwr"] Feb 26 09:42:02 crc kubenswrapper[4741]: I0226 09:42:02.823832 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534982-zxrwr" event={"ID":"12005334-faff-4e67-b8ad-b2ae50d7e79f","Type":"ContainerStarted","Data":"d434a157362a2e67fd3276c0732c52921f2b0b9a244bd75446949b4b9b4d6174"} Feb 26 09:42:04 crc kubenswrapper[4741]: I0226 09:42:04.853578 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534982-zxrwr" event={"ID":"12005334-faff-4e67-b8ad-b2ae50d7e79f","Type":"ContainerStarted","Data":"dda150a0cbe7ad63ca50eee6bfc2ac737ad8078aeeb820c2c96d65433e410d15"} Feb 26 09:42:04 crc kubenswrapper[4741]: I0226 09:42:04.882432 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534982-zxrwr" podStartSLOduration=3.962292714 podStartE2EDuration="4.879572167s" podCreationTimestamp="2026-02-26 09:42:00 +0000 UTC" firstStartedPulling="2026-02-26 09:42:02.545737455 +0000 UTC m=+5357.541674842" lastFinishedPulling="2026-02-26 09:42:03.463016908 +0000 UTC m=+5358.458954295" observedRunningTime="2026-02-26 09:42:04.870681834 +0000 UTC m=+5359.866619221" watchObservedRunningTime="2026-02-26 09:42:04.879572167 +0000 UTC 
m=+5359.875509544" Feb 26 09:42:05 crc kubenswrapper[4741]: I0226 09:42:05.869872 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534982-zxrwr" event={"ID":"12005334-faff-4e67-b8ad-b2ae50d7e79f","Type":"ContainerDied","Data":"dda150a0cbe7ad63ca50eee6bfc2ac737ad8078aeeb820c2c96d65433e410d15"} Feb 26 09:42:05 crc kubenswrapper[4741]: I0226 09:42:05.871303 4741 generic.go:334] "Generic (PLEG): container finished" podID="12005334-faff-4e67-b8ad-b2ae50d7e79f" containerID="dda150a0cbe7ad63ca50eee6bfc2ac737ad8078aeeb820c2c96d65433e410d15" exitCode=0 Feb 26 09:42:07 crc kubenswrapper[4741]: I0226 09:42:07.525764 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534982-zxrwr" Feb 26 09:42:07 crc kubenswrapper[4741]: I0226 09:42:07.582965 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9frq\" (UniqueName: \"kubernetes.io/projected/12005334-faff-4e67-b8ad-b2ae50d7e79f-kube-api-access-m9frq\") pod \"12005334-faff-4e67-b8ad-b2ae50d7e79f\" (UID: \"12005334-faff-4e67-b8ad-b2ae50d7e79f\") " Feb 26 09:42:07 crc kubenswrapper[4741]: I0226 09:42:07.601587 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12005334-faff-4e67-b8ad-b2ae50d7e79f-kube-api-access-m9frq" (OuterVolumeSpecName: "kube-api-access-m9frq") pod "12005334-faff-4e67-b8ad-b2ae50d7e79f" (UID: "12005334-faff-4e67-b8ad-b2ae50d7e79f"). InnerVolumeSpecName "kube-api-access-m9frq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:42:07 crc kubenswrapper[4741]: I0226 09:42:07.688597 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9frq\" (UniqueName: \"kubernetes.io/projected/12005334-faff-4e67-b8ad-b2ae50d7e79f-kube-api-access-m9frq\") on node \"crc\" DevicePath \"\"" Feb 26 09:42:07 crc kubenswrapper[4741]: I0226 09:42:07.902556 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534982-zxrwr" event={"ID":"12005334-faff-4e67-b8ad-b2ae50d7e79f","Type":"ContainerDied","Data":"d434a157362a2e67fd3276c0732c52921f2b0b9a244bd75446949b4b9b4d6174"} Feb 26 09:42:07 crc kubenswrapper[4741]: I0226 09:42:07.902653 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534982-zxrwr" Feb 26 09:42:07 crc kubenswrapper[4741]: I0226 09:42:07.903539 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d434a157362a2e67fd3276c0732c52921f2b0b9a244bd75446949b4b9b4d6174" Feb 26 09:42:07 crc kubenswrapper[4741]: I0226 09:42:07.975708 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534976-p8lnt"] Feb 26 09:42:07 crc kubenswrapper[4741]: I0226 09:42:07.987643 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534976-p8lnt"] Feb 26 09:42:09 crc kubenswrapper[4741]: I0226 09:42:09.807274 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c470433-6812-4042-b1e1-f7b1d3aa4696" path="/var/lib/kubelet/pods/3c470433-6812-4042-b1e1-f7b1d3aa4696/volumes" Feb 26 09:42:10 crc kubenswrapper[4741]: I0226 09:42:10.787879 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:42:10 crc kubenswrapper[4741]: E0226 09:42:10.788668 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:42:25 crc kubenswrapper[4741]: I0226 09:42:25.812525 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:42:25 crc kubenswrapper[4741]: E0226 09:42:25.813941 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:42:37 crc kubenswrapper[4741]: I0226 09:42:37.788026 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:42:37 crc kubenswrapper[4741]: E0226 09:42:37.789901 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.247098 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wlfsd"] Feb 26 09:42:46 crc kubenswrapper[4741]: E0226 09:42:46.251419 4741 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="12005334-faff-4e67-b8ad-b2ae50d7e79f" containerName="oc" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.251456 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="12005334-faff-4e67-b8ad-b2ae50d7e79f" containerName="oc" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.253233 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="12005334-faff-4e67-b8ad-b2ae50d7e79f" containerName="oc" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.257227 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.270320 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlfsd"] Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.320005 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q68j\" (UniqueName: \"kubernetes.io/projected/80657694-4c16-478f-8950-fb1d128fd19d-kube-api-access-2q68j\") pod \"community-operators-wlfsd\" (UID: \"80657694-4c16-478f-8950-fb1d128fd19d\") " pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.320807 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80657694-4c16-478f-8950-fb1d128fd19d-catalog-content\") pod \"community-operators-wlfsd\" (UID: \"80657694-4c16-478f-8950-fb1d128fd19d\") " pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.321465 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80657694-4c16-478f-8950-fb1d128fd19d-utilities\") pod \"community-operators-wlfsd\" (UID: \"80657694-4c16-478f-8950-fb1d128fd19d\") " 
pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.424820 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80657694-4c16-478f-8950-fb1d128fd19d-catalog-content\") pod \"community-operators-wlfsd\" (UID: \"80657694-4c16-478f-8950-fb1d128fd19d\") " pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.425228 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80657694-4c16-478f-8950-fb1d128fd19d-utilities\") pod \"community-operators-wlfsd\" (UID: \"80657694-4c16-478f-8950-fb1d128fd19d\") " pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.425476 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q68j\" (UniqueName: \"kubernetes.io/projected/80657694-4c16-478f-8950-fb1d128fd19d-kube-api-access-2q68j\") pod \"community-operators-wlfsd\" (UID: \"80657694-4c16-478f-8950-fb1d128fd19d\") " pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.426309 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80657694-4c16-478f-8950-fb1d128fd19d-catalog-content\") pod \"community-operators-wlfsd\" (UID: \"80657694-4c16-478f-8950-fb1d128fd19d\") " pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.426600 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80657694-4c16-478f-8950-fb1d128fd19d-utilities\") pod \"community-operators-wlfsd\" (UID: \"80657694-4c16-478f-8950-fb1d128fd19d\") " 
pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.458146 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q68j\" (UniqueName: \"kubernetes.io/projected/80657694-4c16-478f-8950-fb1d128fd19d-kube-api-access-2q68j\") pod \"community-operators-wlfsd\" (UID: \"80657694-4c16-478f-8950-fb1d128fd19d\") " pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:42:46 crc kubenswrapper[4741]: I0226 09:42:46.585787 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:42:47 crc kubenswrapper[4741]: I0226 09:42:47.275621 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlfsd"] Feb 26 09:42:47 crc kubenswrapper[4741]: W0226 09:42:47.513195 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80657694_4c16_478f_8950_fb1d128fd19d.slice/crio-fa9a08202aa39e74232a192126377e58d80649390085b07e0563d82d2512e734 WatchSource:0}: Error finding container fa9a08202aa39e74232a192126377e58d80649390085b07e0563d82d2512e734: Status 404 returned error can't find the container with id fa9a08202aa39e74232a192126377e58d80649390085b07e0563d82d2512e734 Feb 26 09:42:48 crc kubenswrapper[4741]: I0226 09:42:48.391741 4741 generic.go:334] "Generic (PLEG): container finished" podID="80657694-4c16-478f-8950-fb1d128fd19d" containerID="dd2c5f54191d2719803d50f0a7d6bdc55dd9e248bcf5e1907ae29a6859e68e90" exitCode=0 Feb 26 09:42:48 crc kubenswrapper[4741]: I0226 09:42:48.391850 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlfsd" event={"ID":"80657694-4c16-478f-8950-fb1d128fd19d","Type":"ContainerDied","Data":"dd2c5f54191d2719803d50f0a7d6bdc55dd9e248bcf5e1907ae29a6859e68e90"} Feb 26 09:42:48 crc kubenswrapper[4741]: I0226 09:42:48.392171 
4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlfsd" event={"ID":"80657694-4c16-478f-8950-fb1d128fd19d","Type":"ContainerStarted","Data":"fa9a08202aa39e74232a192126377e58d80649390085b07e0563d82d2512e734"} Feb 26 09:42:50 crc kubenswrapper[4741]: I0226 09:42:50.424422 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlfsd" event={"ID":"80657694-4c16-478f-8950-fb1d128fd19d","Type":"ContainerStarted","Data":"f334c311ef65c8b681412f16585dd424b88e1bafc5f256b3c4790b937bb8bff2"} Feb 26 09:42:51 crc kubenswrapper[4741]: I0226 09:42:51.788608 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:42:51 crc kubenswrapper[4741]: E0226 09:42:51.789636 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:42:52 crc kubenswrapper[4741]: I0226 09:42:52.465287 4741 generic.go:334] "Generic (PLEG): container finished" podID="80657694-4c16-478f-8950-fb1d128fd19d" containerID="f334c311ef65c8b681412f16585dd424b88e1bafc5f256b3c4790b937bb8bff2" exitCode=0 Feb 26 09:42:52 crc kubenswrapper[4741]: I0226 09:42:52.465816 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlfsd" event={"ID":"80657694-4c16-478f-8950-fb1d128fd19d","Type":"ContainerDied","Data":"f334c311ef65c8b681412f16585dd424b88e1bafc5f256b3c4790b937bb8bff2"} Feb 26 09:42:53 crc kubenswrapper[4741]: I0226 09:42:53.482550 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-wlfsd" event={"ID":"80657694-4c16-478f-8950-fb1d128fd19d","Type":"ContainerStarted","Data":"4a930be1a6dd585b45baf3f4a37eff86faa6222d7fee2dd47506fa876d8c9799"} Feb 26 09:42:56 crc kubenswrapper[4741]: I0226 09:42:56.587165 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:42:56 crc kubenswrapper[4741]: I0226 09:42:56.587756 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:42:57 crc kubenswrapper[4741]: I0226 09:42:57.640550 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wlfsd" podUID="80657694-4c16-478f-8950-fb1d128fd19d" containerName="registry-server" probeResult="failure" output=< Feb 26 09:42:57 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:42:57 crc kubenswrapper[4741]: > Feb 26 09:43:04 crc kubenswrapper[4741]: I0226 09:43:04.787411 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:43:04 crc kubenswrapper[4741]: E0226 09:43:04.788090 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:43:06 crc kubenswrapper[4741]: I0226 09:43:06.659593 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:43:06 crc kubenswrapper[4741]: I0226 09:43:06.685613 4741 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/community-operators-wlfsd" podStartSLOduration=16.157012575 podStartE2EDuration="20.685581247s" podCreationTimestamp="2026-02-26 09:42:46 +0000 UTC" firstStartedPulling="2026-02-26 09:42:48.394181379 +0000 UTC m=+5403.390118776" lastFinishedPulling="2026-02-26 09:42:52.922750071 +0000 UTC m=+5407.918687448" observedRunningTime="2026-02-26 09:42:53.508009191 +0000 UTC m=+5408.503946588" watchObservedRunningTime="2026-02-26 09:43:06.685581247 +0000 UTC m=+5421.681518634" Feb 26 09:43:06 crc kubenswrapper[4741]: I0226 09:43:06.718047 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:43:06 crc kubenswrapper[4741]: I0226 09:43:06.906878 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wlfsd"] Feb 26 09:43:07 crc kubenswrapper[4741]: I0226 09:43:07.148224 4741 scope.go:117] "RemoveContainer" containerID="c212b7d32a3d0c20e4822dc5d36ec107fa59561bb1c8a0602ef189b205b24424" Feb 26 09:43:08 crc kubenswrapper[4741]: I0226 09:43:08.681659 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wlfsd" podUID="80657694-4c16-478f-8950-fb1d128fd19d" containerName="registry-server" containerID="cri-o://4a930be1a6dd585b45baf3f4a37eff86faa6222d7fee2dd47506fa876d8c9799" gracePeriod=2 Feb 26 09:43:09 crc kubenswrapper[4741]: I0226 09:43:09.695342 4741 generic.go:334] "Generic (PLEG): container finished" podID="80657694-4c16-478f-8950-fb1d128fd19d" containerID="4a930be1a6dd585b45baf3f4a37eff86faa6222d7fee2dd47506fa876d8c9799" exitCode=0 Feb 26 09:43:09 crc kubenswrapper[4741]: I0226 09:43:09.695878 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlfsd" 
event={"ID":"80657694-4c16-478f-8950-fb1d128fd19d","Type":"ContainerDied","Data":"4a930be1a6dd585b45baf3f4a37eff86faa6222d7fee2dd47506fa876d8c9799"} Feb 26 09:43:09 crc kubenswrapper[4741]: I0226 09:43:09.695910 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlfsd" event={"ID":"80657694-4c16-478f-8950-fb1d128fd19d","Type":"ContainerDied","Data":"fa9a08202aa39e74232a192126377e58d80649390085b07e0563d82d2512e734"} Feb 26 09:43:09 crc kubenswrapper[4741]: I0226 09:43:09.695921 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa9a08202aa39e74232a192126377e58d80649390085b07e0563d82d2512e734" Feb 26 09:43:09 crc kubenswrapper[4741]: I0226 09:43:09.757062 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlfsd" Feb 26 09:43:09 crc kubenswrapper[4741]: I0226 09:43:09.856837 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q68j\" (UniqueName: \"kubernetes.io/projected/80657694-4c16-478f-8950-fb1d128fd19d-kube-api-access-2q68j\") pod \"80657694-4c16-478f-8950-fb1d128fd19d\" (UID: \"80657694-4c16-478f-8950-fb1d128fd19d\") " Feb 26 09:43:09 crc kubenswrapper[4741]: I0226 09:43:09.856914 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80657694-4c16-478f-8950-fb1d128fd19d-utilities\") pod \"80657694-4c16-478f-8950-fb1d128fd19d\" (UID: \"80657694-4c16-478f-8950-fb1d128fd19d\") " Feb 26 09:43:09 crc kubenswrapper[4741]: I0226 09:43:09.857108 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80657694-4c16-478f-8950-fb1d128fd19d-catalog-content\") pod \"80657694-4c16-478f-8950-fb1d128fd19d\" (UID: \"80657694-4c16-478f-8950-fb1d128fd19d\") " Feb 26 09:43:09 crc kubenswrapper[4741]: 
I0226 09:43:09.859076 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80657694-4c16-478f-8950-fb1d128fd19d-utilities" (OuterVolumeSpecName: "utilities") pod "80657694-4c16-478f-8950-fb1d128fd19d" (UID: "80657694-4c16-478f-8950-fb1d128fd19d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 09:43:09 crc kubenswrapper[4741]: I0226 09:43:09.953485 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80657694-4c16-478f-8950-fb1d128fd19d-kube-api-access-2q68j" (OuterVolumeSpecName: "kube-api-access-2q68j") pod "80657694-4c16-478f-8950-fb1d128fd19d" (UID: "80657694-4c16-478f-8950-fb1d128fd19d"). InnerVolumeSpecName "kube-api-access-2q68j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:43:09 crc kubenswrapper[4741]: I0226 09:43:09.979268 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q68j\" (UniqueName: \"kubernetes.io/projected/80657694-4c16-478f-8950-fb1d128fd19d-kube-api-access-2q68j\") on node \"crc\" DevicePath \"\""
Feb 26 09:43:09 crc kubenswrapper[4741]: I0226 09:43:09.979330 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80657694-4c16-478f-8950-fb1d128fd19d-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 09:43:10 crc kubenswrapper[4741]: I0226 09:43:10.001705 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80657694-4c16-478f-8950-fb1d128fd19d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80657694-4c16-478f-8950-fb1d128fd19d" (UID: "80657694-4c16-478f-8950-fb1d128fd19d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 09:43:10 crc kubenswrapper[4741]: I0226 09:43:10.082090 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80657694-4c16-478f-8950-fb1d128fd19d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 09:43:10 crc kubenswrapper[4741]: I0226 09:43:10.708666 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlfsd"
Feb 26 09:43:10 crc kubenswrapper[4741]: I0226 09:43:10.759393 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wlfsd"]
Feb 26 09:43:10 crc kubenswrapper[4741]: I0226 09:43:10.776746 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wlfsd"]
Feb 26 09:43:11 crc kubenswrapper[4741]: I0226 09:43:11.801065 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80657694-4c16-478f-8950-fb1d128fd19d" path="/var/lib/kubelet/pods/80657694-4c16-478f-8950-fb1d128fd19d/volumes"
Feb 26 09:43:17 crc kubenswrapper[4741]: I0226 09:43:17.788545 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4"
Feb 26 09:43:17 crc kubenswrapper[4741]: E0226 09:43:17.790101 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5"
Feb 26 09:43:29 crc kubenswrapper[4741]: I0226 09:43:29.788522 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4"
Feb 26 09:43:30 crc kubenswrapper[4741]: I0226 09:43:30.966284 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"267ec551efc0b7a8c9baffc67b612f38cb576ba7cd8097d3c329332400b15e7b"}
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.158097 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xh2bf"]
Feb 26 09:43:54 crc kubenswrapper[4741]: E0226 09:43:54.161090 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80657694-4c16-478f-8950-fb1d128fd19d" containerName="extract-utilities"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.161330 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="80657694-4c16-478f-8950-fb1d128fd19d" containerName="extract-utilities"
Feb 26 09:43:54 crc kubenswrapper[4741]: E0226 09:43:54.161355 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80657694-4c16-478f-8950-fb1d128fd19d" containerName="registry-server"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.161365 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="80657694-4c16-478f-8950-fb1d128fd19d" containerName="registry-server"
Feb 26 09:43:54 crc kubenswrapper[4741]: E0226 09:43:54.161477 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80657694-4c16-478f-8950-fb1d128fd19d" containerName="extract-content"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.161487 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="80657694-4c16-478f-8950-fb1d128fd19d" containerName="extract-content"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.162008 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="80657694-4c16-478f-8950-fb1d128fd19d" containerName="registry-server"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.167373 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.200029 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xh2bf"]
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.287438 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e43835-362a-4db2-831c-7a7f834df459-catalog-content\") pod \"redhat-operators-xh2bf\" (UID: \"63e43835-362a-4db2-831c-7a7f834df459\") " pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.287888 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e43835-362a-4db2-831c-7a7f834df459-utilities\") pod \"redhat-operators-xh2bf\" (UID: \"63e43835-362a-4db2-831c-7a7f834df459\") " pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.288006 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj9hx\" (UniqueName: \"kubernetes.io/projected/63e43835-362a-4db2-831c-7a7f834df459-kube-api-access-nj9hx\") pod \"redhat-operators-xh2bf\" (UID: \"63e43835-362a-4db2-831c-7a7f834df459\") " pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.390932 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj9hx\" (UniqueName: \"kubernetes.io/projected/63e43835-362a-4db2-831c-7a7f834df459-kube-api-access-nj9hx\") pod \"redhat-operators-xh2bf\" (UID: \"63e43835-362a-4db2-831c-7a7f834df459\") " pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.391804 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e43835-362a-4db2-831c-7a7f834df459-catalog-content\") pod \"redhat-operators-xh2bf\" (UID: \"63e43835-362a-4db2-831c-7a7f834df459\") " pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.392259 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e43835-362a-4db2-831c-7a7f834df459-utilities\") pod \"redhat-operators-xh2bf\" (UID: \"63e43835-362a-4db2-831c-7a7f834df459\") " pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.403461 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e43835-362a-4db2-831c-7a7f834df459-utilities\") pod \"redhat-operators-xh2bf\" (UID: \"63e43835-362a-4db2-831c-7a7f834df459\") " pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.408461 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e43835-362a-4db2-831c-7a7f834df459-catalog-content\") pod \"redhat-operators-xh2bf\" (UID: \"63e43835-362a-4db2-831c-7a7f834df459\") " pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.487887 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj9hx\" (UniqueName: \"kubernetes.io/projected/63e43835-362a-4db2-831c-7a7f834df459-kube-api-access-nj9hx\") pod \"redhat-operators-xh2bf\" (UID: \"63e43835-362a-4db2-831c-7a7f834df459\") " pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:43:54 crc kubenswrapper[4741]: I0226 09:43:54.516772 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:43:55 crc kubenswrapper[4741]: I0226 09:43:55.620611 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xh2bf"]
Feb 26 09:43:55 crc kubenswrapper[4741]: W0226 09:43:55.921688 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63e43835_362a_4db2_831c_7a7f834df459.slice/crio-096dd2e07d72e6f68a704f072e8a508072140949685610ab3ee539586867dd9e WatchSource:0}: Error finding container 096dd2e07d72e6f68a704f072e8a508072140949685610ab3ee539586867dd9e: Status 404 returned error can't find the container with id 096dd2e07d72e6f68a704f072e8a508072140949685610ab3ee539586867dd9e
Feb 26 09:43:56 crc kubenswrapper[4741]: I0226 09:43:56.276137 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh2bf" event={"ID":"63e43835-362a-4db2-831c-7a7f834df459","Type":"ContainerStarted","Data":"096dd2e07d72e6f68a704f072e8a508072140949685610ab3ee539586867dd9e"}
Feb 26 09:43:57 crc kubenswrapper[4741]: I0226 09:43:57.293381 4741 generic.go:334] "Generic (PLEG): container finished" podID="63e43835-362a-4db2-831c-7a7f834df459" containerID="34d5dbbd0cc50d2b80f0fdc5af94f4610fd7cd4a16321c2fbd7fb60ac11b9540" exitCode=0
Feb 26 09:43:57 crc kubenswrapper[4741]: I0226 09:43:57.293609 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh2bf" event={"ID":"63e43835-362a-4db2-831c-7a7f834df459","Type":"ContainerDied","Data":"34d5dbbd0cc50d2b80f0fdc5af94f4610fd7cd4a16321c2fbd7fb60ac11b9540"}
Feb 26 09:43:57 crc kubenswrapper[4741]: I0226 09:43:57.297813 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 09:43:59 crc kubenswrapper[4741]: I0226 09:43:59.318207 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh2bf" event={"ID":"63e43835-362a-4db2-831c-7a7f834df459","Type":"ContainerStarted","Data":"94034317b672f719501c8008aabd06c20598cc3b058313dbaa5c2ef4fe6322bc"}
Feb 26 09:44:00 crc kubenswrapper[4741]: I0226 09:44:00.234715 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534984-fcdl2"]
Feb 26 09:44:00 crc kubenswrapper[4741]: I0226 09:44:00.237744 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534984-fcdl2"
Feb 26 09:44:00 crc kubenswrapper[4741]: I0226 09:44:00.256001 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 09:44:00 crc kubenswrapper[4741]: I0226 09:44:00.256306 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 09:44:00 crc kubenswrapper[4741]: I0226 09:44:00.258759 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6"
Feb 26 09:44:00 crc kubenswrapper[4741]: I0226 09:44:00.263398 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534984-fcdl2"]
Feb 26 09:44:00 crc kubenswrapper[4741]: I0226 09:44:00.372342 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64j7g\" (UniqueName: \"kubernetes.io/projected/668366fe-f55b-4fda-8fa2-98fc13dcfa38-kube-api-access-64j7g\") pod \"auto-csr-approver-29534984-fcdl2\" (UID: \"668366fe-f55b-4fda-8fa2-98fc13dcfa38\") " pod="openshift-infra/auto-csr-approver-29534984-fcdl2"
Feb 26 09:44:00 crc kubenswrapper[4741]: I0226 09:44:00.476103 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64j7g\" (UniqueName: \"kubernetes.io/projected/668366fe-f55b-4fda-8fa2-98fc13dcfa38-kube-api-access-64j7g\") pod \"auto-csr-approver-29534984-fcdl2\" (UID: \"668366fe-f55b-4fda-8fa2-98fc13dcfa38\") " pod="openshift-infra/auto-csr-approver-29534984-fcdl2"
Feb 26 09:44:00 crc kubenswrapper[4741]: I0226 09:44:00.497145 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64j7g\" (UniqueName: \"kubernetes.io/projected/668366fe-f55b-4fda-8fa2-98fc13dcfa38-kube-api-access-64j7g\") pod \"auto-csr-approver-29534984-fcdl2\" (UID: \"668366fe-f55b-4fda-8fa2-98fc13dcfa38\") " pod="openshift-infra/auto-csr-approver-29534984-fcdl2"
Feb 26 09:44:00 crc kubenswrapper[4741]: I0226 09:44:00.573793 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534984-fcdl2"
Feb 26 09:44:01 crc kubenswrapper[4741]: I0226 09:44:01.226859 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534984-fcdl2"]
Feb 26 09:44:01 crc kubenswrapper[4741]: W0226 09:44:01.236374 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod668366fe_f55b_4fda_8fa2_98fc13dcfa38.slice/crio-e69589ba4b8ac33bb70d6a2b5848dc83958b7f54efab045c64560ad9473869d9 WatchSource:0}: Error finding container e69589ba4b8ac33bb70d6a2b5848dc83958b7f54efab045c64560ad9473869d9: Status 404 returned error can't find the container with id e69589ba4b8ac33bb70d6a2b5848dc83958b7f54efab045c64560ad9473869d9
Feb 26 09:44:01 crc kubenswrapper[4741]: I0226 09:44:01.346281 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534984-fcdl2" event={"ID":"668366fe-f55b-4fda-8fa2-98fc13dcfa38","Type":"ContainerStarted","Data":"e69589ba4b8ac33bb70d6a2b5848dc83958b7f54efab045c64560ad9473869d9"}
Feb 26 09:44:04 crc kubenswrapper[4741]: I0226 09:44:04.405006 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534984-fcdl2" event={"ID":"668366fe-f55b-4fda-8fa2-98fc13dcfa38","Type":"ContainerStarted","Data":"b8ec27eb60abb2802666b602f9db36886dae7fcedfb62bf93f279cf29bc152d3"}
Feb 26 09:44:04 crc kubenswrapper[4741]: I0226 09:44:04.431914 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534984-fcdl2" podStartSLOduration=3.279708932 podStartE2EDuration="4.431885468s" podCreationTimestamp="2026-02-26 09:44:00 +0000 UTC" firstStartedPulling="2026-02-26 09:44:01.243285639 +0000 UTC m=+5476.239223026" lastFinishedPulling="2026-02-26 09:44:02.395462175 +0000 UTC m=+5477.391399562" observedRunningTime="2026-02-26 09:44:04.424827977 +0000 UTC m=+5479.420765374" watchObservedRunningTime="2026-02-26 09:44:04.431885468 +0000 UTC m=+5479.427822865"
Feb 26 09:44:05 crc kubenswrapper[4741]: I0226 09:44:05.421480 4741 generic.go:334] "Generic (PLEG): container finished" podID="63e43835-362a-4db2-831c-7a7f834df459" containerID="94034317b672f719501c8008aabd06c20598cc3b058313dbaa5c2ef4fe6322bc" exitCode=0
Feb 26 09:44:05 crc kubenswrapper[4741]: I0226 09:44:05.421540 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh2bf" event={"ID":"63e43835-362a-4db2-831c-7a7f834df459","Type":"ContainerDied","Data":"94034317b672f719501c8008aabd06c20598cc3b058313dbaa5c2ef4fe6322bc"}
Feb 26 09:44:06 crc kubenswrapper[4741]: I0226 09:44:06.440453 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh2bf" event={"ID":"63e43835-362a-4db2-831c-7a7f834df459","Type":"ContainerStarted","Data":"991d255973ba480063992d69d1071c9845fe71ac2d64664ad4c0c5aee2ed07d7"}
Feb 26 09:44:06 crc kubenswrapper[4741]: I0226 09:44:06.444256 4741 generic.go:334] "Generic (PLEG): container finished" podID="668366fe-f55b-4fda-8fa2-98fc13dcfa38" containerID="b8ec27eb60abb2802666b602f9db36886dae7fcedfb62bf93f279cf29bc152d3" exitCode=0
Feb 26 09:44:06 crc kubenswrapper[4741]: I0226 09:44:06.444314 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534984-fcdl2" event={"ID":"668366fe-f55b-4fda-8fa2-98fc13dcfa38","Type":"ContainerDied","Data":"b8ec27eb60abb2802666b602f9db36886dae7fcedfb62bf93f279cf29bc152d3"}
Feb 26 09:44:06 crc kubenswrapper[4741]: I0226 09:44:06.485983 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xh2bf" podStartSLOduration=3.935490794 podStartE2EDuration="12.485960101s" podCreationTimestamp="2026-02-26 09:43:54 +0000 UTC" firstStartedPulling="2026-02-26 09:43:57.296372004 +0000 UTC m=+5472.292309391" lastFinishedPulling="2026-02-26 09:44:05.846841311 +0000 UTC m=+5480.842778698" observedRunningTime="2026-02-26 09:44:06.469216085 +0000 UTC m=+5481.465153492" watchObservedRunningTime="2026-02-26 09:44:06.485960101 +0000 UTC m=+5481.481897488"
Feb 26 09:44:08 crc kubenswrapper[4741]: I0226 09:44:08.001458 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534984-fcdl2"
Feb 26 09:44:08 crc kubenswrapper[4741]: I0226 09:44:08.116294 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64j7g\" (UniqueName: \"kubernetes.io/projected/668366fe-f55b-4fda-8fa2-98fc13dcfa38-kube-api-access-64j7g\") pod \"668366fe-f55b-4fda-8fa2-98fc13dcfa38\" (UID: \"668366fe-f55b-4fda-8fa2-98fc13dcfa38\") "
Feb 26 09:44:08 crc kubenswrapper[4741]: I0226 09:44:08.132552 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668366fe-f55b-4fda-8fa2-98fc13dcfa38-kube-api-access-64j7g" (OuterVolumeSpecName: "kube-api-access-64j7g") pod "668366fe-f55b-4fda-8fa2-98fc13dcfa38" (UID: "668366fe-f55b-4fda-8fa2-98fc13dcfa38"). InnerVolumeSpecName "kube-api-access-64j7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:44:08 crc kubenswrapper[4741]: I0226 09:44:08.221535 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64j7g\" (UniqueName: \"kubernetes.io/projected/668366fe-f55b-4fda-8fa2-98fc13dcfa38-kube-api-access-64j7g\") on node \"crc\" DevicePath \"\""
Feb 26 09:44:08 crc kubenswrapper[4741]: I0226 09:44:08.472461 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534984-fcdl2" event={"ID":"668366fe-f55b-4fda-8fa2-98fc13dcfa38","Type":"ContainerDied","Data":"e69589ba4b8ac33bb70d6a2b5848dc83958b7f54efab045c64560ad9473869d9"}
Feb 26 09:44:08 crc kubenswrapper[4741]: I0226 09:44:08.472514 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e69589ba4b8ac33bb70d6a2b5848dc83958b7f54efab045c64560ad9473869d9"
Feb 26 09:44:08 crc kubenswrapper[4741]: I0226 09:44:08.472542 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534984-fcdl2"
Feb 26 09:44:08 crc kubenswrapper[4741]: I0226 09:44:08.592054 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534978-zjn67"]
Feb 26 09:44:08 crc kubenswrapper[4741]: I0226 09:44:08.606133 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534978-zjn67"]
Feb 26 09:44:09 crc kubenswrapper[4741]: I0226 09:44:09.803856 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcee539a-3cf8-4355-b9b3-85a3ffcefe07" path="/var/lib/kubelet/pods/bcee539a-3cf8-4355-b9b3-85a3ffcefe07/volumes"
Feb 26 09:44:14 crc kubenswrapper[4741]: I0226 09:44:14.517440 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:44:14 crc kubenswrapper[4741]: I0226 09:44:14.518218 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:44:15 crc kubenswrapper[4741]: I0226 09:44:15.570089 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh2bf" podUID="63e43835-362a-4db2-831c-7a7f834df459" containerName="registry-server" probeResult="failure" output=<
Feb 26 09:44:15 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s
Feb 26 09:44:15 crc kubenswrapper[4741]: >
Feb 26 09:44:25 crc kubenswrapper[4741]: I0226 09:44:25.951445 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh2bf" podUID="63e43835-362a-4db2-831c-7a7f834df459" containerName="registry-server" probeResult="failure" output=<
Feb 26 09:44:25 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s
Feb 26 09:44:25 crc kubenswrapper[4741]: >
Feb 26 09:44:35 crc kubenswrapper[4741]: I0226 09:44:35.570649 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh2bf" podUID="63e43835-362a-4db2-831c-7a7f834df459" containerName="registry-server" probeResult="failure" output=<
Feb 26 09:44:35 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s
Feb 26 09:44:35 crc kubenswrapper[4741]: >
Feb 26 09:44:44 crc kubenswrapper[4741]: I0226 09:44:44.578809 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:44:44 crc kubenswrapper[4741]: I0226 09:44:44.651333 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:44:44 crc kubenswrapper[4741]: I0226 09:44:44.829279 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xh2bf"]
Feb 26 09:44:45 crc kubenswrapper[4741]: I0226 09:44:45.967442 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xh2bf" podUID="63e43835-362a-4db2-831c-7a7f834df459" containerName="registry-server" containerID="cri-o://991d255973ba480063992d69d1071c9845fe71ac2d64664ad4c0c5aee2ed07d7" gracePeriod=2
Feb 26 09:44:46 crc kubenswrapper[4741]: I0226 09:44:46.969607 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:44:46 crc kubenswrapper[4741]: I0226 09:44:46.980574 4741 generic.go:334] "Generic (PLEG): container finished" podID="63e43835-362a-4db2-831c-7a7f834df459" containerID="991d255973ba480063992d69d1071c9845fe71ac2d64664ad4c0c5aee2ed07d7" exitCode=0
Feb 26 09:44:46 crc kubenswrapper[4741]: I0226 09:44:46.980642 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh2bf" event={"ID":"63e43835-362a-4db2-831c-7a7f834df459","Type":"ContainerDied","Data":"991d255973ba480063992d69d1071c9845fe71ac2d64664ad4c0c5aee2ed07d7"}
Feb 26 09:44:46 crc kubenswrapper[4741]: I0226 09:44:46.980678 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh2bf" event={"ID":"63e43835-362a-4db2-831c-7a7f834df459","Type":"ContainerDied","Data":"096dd2e07d72e6f68a704f072e8a508072140949685610ab3ee539586867dd9e"}
Feb 26 09:44:46 crc kubenswrapper[4741]: I0226 09:44:46.980732 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xh2bf"
Feb 26 09:44:46 crc kubenswrapper[4741]: I0226 09:44:46.981615 4741 scope.go:117] "RemoveContainer" containerID="991d255973ba480063992d69d1071c9845fe71ac2d64664ad4c0c5aee2ed07d7"
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.036594 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e43835-362a-4db2-831c-7a7f834df459-catalog-content\") pod \"63e43835-362a-4db2-831c-7a7f834df459\" (UID: \"63e43835-362a-4db2-831c-7a7f834df459\") "
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.036771 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e43835-362a-4db2-831c-7a7f834df459-utilities\") pod \"63e43835-362a-4db2-831c-7a7f834df459\" (UID: \"63e43835-362a-4db2-831c-7a7f834df459\") "
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.037018 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj9hx\" (UniqueName: \"kubernetes.io/projected/63e43835-362a-4db2-831c-7a7f834df459-kube-api-access-nj9hx\") pod \"63e43835-362a-4db2-831c-7a7f834df459\" (UID: \"63e43835-362a-4db2-831c-7a7f834df459\") "
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.048713 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e43835-362a-4db2-831c-7a7f834df459-utilities" (OuterVolumeSpecName: "utilities") pod "63e43835-362a-4db2-831c-7a7f834df459" (UID: "63e43835-362a-4db2-831c-7a7f834df459"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.056182 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e43835-362a-4db2-831c-7a7f834df459-kube-api-access-nj9hx" (OuterVolumeSpecName: "kube-api-access-nj9hx") pod "63e43835-362a-4db2-831c-7a7f834df459" (UID: "63e43835-362a-4db2-831c-7a7f834df459"). InnerVolumeSpecName "kube-api-access-nj9hx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.062929 4741 scope.go:117] "RemoveContainer" containerID="94034317b672f719501c8008aabd06c20598cc3b058313dbaa5c2ef4fe6322bc"
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.142570 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e43835-362a-4db2-831c-7a7f834df459-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.142873 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj9hx\" (UniqueName: \"kubernetes.io/projected/63e43835-362a-4db2-831c-7a7f834df459-kube-api-access-nj9hx\") on node \"crc\" DevicePath \"\""
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.154666 4741 scope.go:117] "RemoveContainer" containerID="34d5dbbd0cc50d2b80f0fdc5af94f4610fd7cd4a16321c2fbd7fb60ac11b9540"
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.195883 4741 scope.go:117] "RemoveContainer" containerID="991d255973ba480063992d69d1071c9845fe71ac2d64664ad4c0c5aee2ed07d7"
Feb 26 09:44:47 crc kubenswrapper[4741]: E0226 09:44:47.199029 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"991d255973ba480063992d69d1071c9845fe71ac2d64664ad4c0c5aee2ed07d7\": container with ID starting with 991d255973ba480063992d69d1071c9845fe71ac2d64664ad4c0c5aee2ed07d7 not found: ID does not exist" containerID="991d255973ba480063992d69d1071c9845fe71ac2d64664ad4c0c5aee2ed07d7"
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.199085 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"991d255973ba480063992d69d1071c9845fe71ac2d64664ad4c0c5aee2ed07d7"} err="failed to get container status \"991d255973ba480063992d69d1071c9845fe71ac2d64664ad4c0c5aee2ed07d7\": rpc error: code = NotFound desc = could not find container \"991d255973ba480063992d69d1071c9845fe71ac2d64664ad4c0c5aee2ed07d7\": container with ID starting with 991d255973ba480063992d69d1071c9845fe71ac2d64664ad4c0c5aee2ed07d7 not found: ID does not exist"
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.199130 4741 scope.go:117] "RemoveContainer" containerID="94034317b672f719501c8008aabd06c20598cc3b058313dbaa5c2ef4fe6322bc"
Feb 26 09:44:47 crc kubenswrapper[4741]: E0226 09:44:47.199521 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94034317b672f719501c8008aabd06c20598cc3b058313dbaa5c2ef4fe6322bc\": container with ID starting with 94034317b672f719501c8008aabd06c20598cc3b058313dbaa5c2ef4fe6322bc not found: ID does not exist" containerID="94034317b672f719501c8008aabd06c20598cc3b058313dbaa5c2ef4fe6322bc"
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.199640 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94034317b672f719501c8008aabd06c20598cc3b058313dbaa5c2ef4fe6322bc"} err="failed to get container status \"94034317b672f719501c8008aabd06c20598cc3b058313dbaa5c2ef4fe6322bc\": rpc error: code = NotFound desc = could not find container \"94034317b672f719501c8008aabd06c20598cc3b058313dbaa5c2ef4fe6322bc\": container with ID starting with 94034317b672f719501c8008aabd06c20598cc3b058313dbaa5c2ef4fe6322bc not found: ID does not exist"
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.199750 4741 scope.go:117] "RemoveContainer" containerID="34d5dbbd0cc50d2b80f0fdc5af94f4610fd7cd4a16321c2fbd7fb60ac11b9540"
Feb 26 09:44:47 crc kubenswrapper[4741]: E0226 09:44:47.200190 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34d5dbbd0cc50d2b80f0fdc5af94f4610fd7cd4a16321c2fbd7fb60ac11b9540\": container with ID starting with 34d5dbbd0cc50d2b80f0fdc5af94f4610fd7cd4a16321c2fbd7fb60ac11b9540 not found: ID does not exist" containerID="34d5dbbd0cc50d2b80f0fdc5af94f4610fd7cd4a16321c2fbd7fb60ac11b9540"
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.200233 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34d5dbbd0cc50d2b80f0fdc5af94f4610fd7cd4a16321c2fbd7fb60ac11b9540"} err="failed to get container status \"34d5dbbd0cc50d2b80f0fdc5af94f4610fd7cd4a16321c2fbd7fb60ac11b9540\": rpc error: code = NotFound desc = could not find container \"34d5dbbd0cc50d2b80f0fdc5af94f4610fd7cd4a16321c2fbd7fb60ac11b9540\": container with ID starting with 34d5dbbd0cc50d2b80f0fdc5af94f4610fd7cd4a16321c2fbd7fb60ac11b9540 not found: ID does not exist"
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.317038 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e43835-362a-4db2-831c-7a7f834df459-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63e43835-362a-4db2-831c-7a7f834df459" (UID: "63e43835-362a-4db2-831c-7a7f834df459"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.349136 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e43835-362a-4db2-831c-7a7f834df459-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.625415 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xh2bf"]
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.642794 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xh2bf"]
Feb 26 09:44:47 crc kubenswrapper[4741]: I0226 09:44:47.815211 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63e43835-362a-4db2-831c-7a7f834df459" path="/var/lib/kubelet/pods/63e43835-362a-4db2-831c-7a7f834df459/volumes"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.411387 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9hlht"]
Feb 26 09:44:59 crc kubenswrapper[4741]: E0226 09:44:59.412596 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e43835-362a-4db2-831c-7a7f834df459" containerName="registry-server"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.412614 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e43835-362a-4db2-831c-7a7f834df459" containerName="registry-server"
Feb 26 09:44:59 crc kubenswrapper[4741]: E0226 09:44:59.412653 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e43835-362a-4db2-831c-7a7f834df459" containerName="extract-content"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.412661 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e43835-362a-4db2-831c-7a7f834df459" containerName="extract-content"
Feb 26 09:44:59 crc kubenswrapper[4741]: E0226 09:44:59.412683 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e43835-362a-4db2-831c-7a7f834df459" containerName="extract-utilities"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.412692 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e43835-362a-4db2-831c-7a7f834df459" containerName="extract-utilities"
Feb 26 09:44:59 crc kubenswrapper[4741]: E0226 09:44:59.412709 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668366fe-f55b-4fda-8fa2-98fc13dcfa38" containerName="oc"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.412717 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="668366fe-f55b-4fda-8fa2-98fc13dcfa38" containerName="oc"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.413008 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="668366fe-f55b-4fda-8fa2-98fc13dcfa38" containerName="oc"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.413034 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e43835-362a-4db2-831c-7a7f834df459" containerName="registry-server"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.417008 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9hlht"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.455309 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9hlht"]
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.525643 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdmhm\" (UniqueName: \"kubernetes.io/projected/74d70e30-11e9-4557-bd4c-3639904729df-kube-api-access-hdmhm\") pod \"certified-operators-9hlht\" (UID: \"74d70e30-11e9-4557-bd4c-3639904729df\") " pod="openshift-marketplace/certified-operators-9hlht"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.525720 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74d70e30-11e9-4557-bd4c-3639904729df-utilities\") pod \"certified-operators-9hlht\" (UID: \"74d70e30-11e9-4557-bd4c-3639904729df\") " pod="openshift-marketplace/certified-operators-9hlht"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.525881 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74d70e30-11e9-4557-bd4c-3639904729df-catalog-content\") pod \"certified-operators-9hlht\" (UID: \"74d70e30-11e9-4557-bd4c-3639904729df\") " pod="openshift-marketplace/certified-operators-9hlht"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.628965 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74d70e30-11e9-4557-bd4c-3639904729df-catalog-content\") pod \"certified-operators-9hlht\" (UID: \"74d70e30-11e9-4557-bd4c-3639904729df\") " pod="openshift-marketplace/certified-operators-9hlht"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.629135 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdmhm\" (UniqueName: \"kubernetes.io/projected/74d70e30-11e9-4557-bd4c-3639904729df-kube-api-access-hdmhm\") pod \"certified-operators-9hlht\" (UID: \"74d70e30-11e9-4557-bd4c-3639904729df\") " pod="openshift-marketplace/certified-operators-9hlht"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.629166 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74d70e30-11e9-4557-bd4c-3639904729df-utilities\") pod \"certified-operators-9hlht\" (UID: \"74d70e30-11e9-4557-bd4c-3639904729df\") " pod="openshift-marketplace/certified-operators-9hlht"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.629998 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74d70e30-11e9-4557-bd4c-3639904729df-catalog-content\") pod \"certified-operators-9hlht\" (UID: \"74d70e30-11e9-4557-bd4c-3639904729df\") " pod="openshift-marketplace/certified-operators-9hlht"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.630101 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74d70e30-11e9-4557-bd4c-3639904729df-utilities\") pod \"certified-operators-9hlht\" (UID: \"74d70e30-11e9-4557-bd4c-3639904729df\") " pod="openshift-marketplace/certified-operators-9hlht"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.652992 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdmhm\" (UniqueName: \"kubernetes.io/projected/74d70e30-11e9-4557-bd4c-3639904729df-kube-api-access-hdmhm\") pod \"certified-operators-9hlht\" (UID: \"74d70e30-11e9-4557-bd4c-3639904729df\") " pod="openshift-marketplace/certified-operators-9hlht"
Feb 26 09:44:59 crc kubenswrapper[4741]: I0226 09:44:59.752102 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9hlht"
Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.196735 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh"]
Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.206538 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh"
Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.210705 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.210935 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.240236 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh"]
Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.261280 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48702dac-6b38-4b4f-b001-a1df50fd0883-config-volume\") pod \"collect-profiles-29534985-szjrh\" (UID: \"48702dac-6b38-4b4f-b001-a1df50fd0883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh"
Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.261382 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mll8f\" (UniqueName: \"kubernetes.io/projected/48702dac-6b38-4b4f-b001-a1df50fd0883-kube-api-access-mll8f\") pod \"collect-profiles-29534985-szjrh\" (UID: \"48702dac-6b38-4b4f-b001-a1df50fd0883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh"
Feb 26 09:45:00 crc
kubenswrapper[4741]: I0226 09:45:00.261904 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48702dac-6b38-4b4f-b001-a1df50fd0883-secret-volume\") pod \"collect-profiles-29534985-szjrh\" (UID: \"48702dac-6b38-4b4f-b001-a1df50fd0883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.306023 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9hlht"] Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.364723 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48702dac-6b38-4b4f-b001-a1df50fd0883-config-volume\") pod \"collect-profiles-29534985-szjrh\" (UID: \"48702dac-6b38-4b4f-b001-a1df50fd0883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.364820 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mll8f\" (UniqueName: \"kubernetes.io/projected/48702dac-6b38-4b4f-b001-a1df50fd0883-kube-api-access-mll8f\") pod \"collect-profiles-29534985-szjrh\" (UID: \"48702dac-6b38-4b4f-b001-a1df50fd0883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.364976 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48702dac-6b38-4b4f-b001-a1df50fd0883-secret-volume\") pod \"collect-profiles-29534985-szjrh\" (UID: \"48702dac-6b38-4b4f-b001-a1df50fd0883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.366805 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/48702dac-6b38-4b4f-b001-a1df50fd0883-config-volume\") pod \"collect-profiles-29534985-szjrh\" (UID: \"48702dac-6b38-4b4f-b001-a1df50fd0883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.372133 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48702dac-6b38-4b4f-b001-a1df50fd0883-secret-volume\") pod \"collect-profiles-29534985-szjrh\" (UID: \"48702dac-6b38-4b4f-b001-a1df50fd0883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.384856 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mll8f\" (UniqueName: \"kubernetes.io/projected/48702dac-6b38-4b4f-b001-a1df50fd0883-kube-api-access-mll8f\") pod \"collect-profiles-29534985-szjrh\" (UID: \"48702dac-6b38-4b4f-b001-a1df50fd0883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" Feb 26 09:45:00 crc kubenswrapper[4741]: I0226 09:45:00.556607 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" Feb 26 09:45:01 crc kubenswrapper[4741]: I0226 09:45:01.103306 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh"] Feb 26 09:45:01 crc kubenswrapper[4741]: I0226 09:45:01.178268 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" event={"ID":"48702dac-6b38-4b4f-b001-a1df50fd0883","Type":"ContainerStarted","Data":"875d5f3196c7ea913fbdad08cb5e4d8fa367bb5510e4b6957451298d837540a2"} Feb 26 09:45:01 crc kubenswrapper[4741]: I0226 09:45:01.183005 4741 generic.go:334] "Generic (PLEG): container finished" podID="74d70e30-11e9-4557-bd4c-3639904729df" containerID="d8667528c46b78409a3da226c398149bc8788b4fdb61c4a35f5395eec2e115f6" exitCode=0 Feb 26 09:45:01 crc kubenswrapper[4741]: I0226 09:45:01.183065 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hlht" event={"ID":"74d70e30-11e9-4557-bd4c-3639904729df","Type":"ContainerDied","Data":"d8667528c46b78409a3da226c398149bc8788b4fdb61c4a35f5395eec2e115f6"} Feb 26 09:45:01 crc kubenswrapper[4741]: I0226 09:45:01.183099 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hlht" event={"ID":"74d70e30-11e9-4557-bd4c-3639904729df","Type":"ContainerStarted","Data":"5734322cc3c218788b584380ccf06570128862e40aaeddb7905e7bc72f34b4d2"} Feb 26 09:45:02 crc kubenswrapper[4741]: I0226 09:45:02.197969 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" event={"ID":"48702dac-6b38-4b4f-b001-a1df50fd0883","Type":"ContainerStarted","Data":"7d3e2a370bddd57024cbc4171a4ddf6289c86ba135bd4eac46435b9297fce2d9"} Feb 26 09:45:03 crc kubenswrapper[4741]: I0226 09:45:03.216238 4741 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-9hlht" event={"ID":"74d70e30-11e9-4557-bd4c-3639904729df","Type":"ContainerStarted","Data":"9ff4c7f53aada399649ecd724e4c7130cf472841519aacdb0c9a1cb40303f27e"} Feb 26 09:45:03 crc kubenswrapper[4741]: I0226 09:45:03.219117 4741 generic.go:334] "Generic (PLEG): container finished" podID="48702dac-6b38-4b4f-b001-a1df50fd0883" containerID="7d3e2a370bddd57024cbc4171a4ddf6289c86ba135bd4eac46435b9297fce2d9" exitCode=0 Feb 26 09:45:03 crc kubenswrapper[4741]: I0226 09:45:03.219159 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" event={"ID":"48702dac-6b38-4b4f-b001-a1df50fd0883","Type":"ContainerDied","Data":"7d3e2a370bddd57024cbc4171a4ddf6289c86ba135bd4eac46435b9297fce2d9"} Feb 26 09:45:03 crc kubenswrapper[4741]: I0226 09:45:03.257015 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" podStartSLOduration=3.256990759 podStartE2EDuration="3.256990759s" podCreationTimestamp="2026-02-26 09:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:45:02.231417559 +0000 UTC m=+5537.227354956" watchObservedRunningTime="2026-02-26 09:45:03.256990759 +0000 UTC m=+5538.252928136" Feb 26 09:45:04 crc kubenswrapper[4741]: I0226 09:45:04.721515 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" Feb 26 09:45:04 crc kubenswrapper[4741]: I0226 09:45:04.817735 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48702dac-6b38-4b4f-b001-a1df50fd0883-config-volume\") pod \"48702dac-6b38-4b4f-b001-a1df50fd0883\" (UID: \"48702dac-6b38-4b4f-b001-a1df50fd0883\") " Feb 26 09:45:04 crc kubenswrapper[4741]: I0226 09:45:04.817832 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mll8f\" (UniqueName: \"kubernetes.io/projected/48702dac-6b38-4b4f-b001-a1df50fd0883-kube-api-access-mll8f\") pod \"48702dac-6b38-4b4f-b001-a1df50fd0883\" (UID: \"48702dac-6b38-4b4f-b001-a1df50fd0883\") " Feb 26 09:45:04 crc kubenswrapper[4741]: I0226 09:45:04.818207 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48702dac-6b38-4b4f-b001-a1df50fd0883-secret-volume\") pod \"48702dac-6b38-4b4f-b001-a1df50fd0883\" (UID: \"48702dac-6b38-4b4f-b001-a1df50fd0883\") " Feb 26 09:45:04 crc kubenswrapper[4741]: I0226 09:45:04.820154 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48702dac-6b38-4b4f-b001-a1df50fd0883-config-volume" (OuterVolumeSpecName: "config-volume") pod "48702dac-6b38-4b4f-b001-a1df50fd0883" (UID: "48702dac-6b38-4b4f-b001-a1df50fd0883"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:45:04 crc kubenswrapper[4741]: I0226 09:45:04.825215 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48702dac-6b38-4b4f-b001-a1df50fd0883-kube-api-access-mll8f" (OuterVolumeSpecName: "kube-api-access-mll8f") pod "48702dac-6b38-4b4f-b001-a1df50fd0883" (UID: "48702dac-6b38-4b4f-b001-a1df50fd0883"). 
InnerVolumeSpecName "kube-api-access-mll8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:45:04 crc kubenswrapper[4741]: I0226 09:45:04.826810 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48702dac-6b38-4b4f-b001-a1df50fd0883-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "48702dac-6b38-4b4f-b001-a1df50fd0883" (UID: "48702dac-6b38-4b4f-b001-a1df50fd0883"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:45:04 crc kubenswrapper[4741]: I0226 09:45:04.921598 4741 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48702dac-6b38-4b4f-b001-a1df50fd0883-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:04 crc kubenswrapper[4741]: I0226 09:45:04.921641 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mll8f\" (UniqueName: \"kubernetes.io/projected/48702dac-6b38-4b4f-b001-a1df50fd0883-kube-api-access-mll8f\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:04 crc kubenswrapper[4741]: I0226 09:45:04.921656 4741 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48702dac-6b38-4b4f-b001-a1df50fd0883-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:05 crc kubenswrapper[4741]: I0226 09:45:05.250002 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" Feb 26 09:45:05 crc kubenswrapper[4741]: I0226 09:45:05.249954 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-szjrh" event={"ID":"48702dac-6b38-4b4f-b001-a1df50fd0883","Type":"ContainerDied","Data":"875d5f3196c7ea913fbdad08cb5e4d8fa367bb5510e4b6957451298d837540a2"} Feb 26 09:45:05 crc kubenswrapper[4741]: I0226 09:45:05.250457 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="875d5f3196c7ea913fbdad08cb5e4d8fa367bb5510e4b6957451298d837540a2" Feb 26 09:45:05 crc kubenswrapper[4741]: I0226 09:45:05.255829 4741 generic.go:334] "Generic (PLEG): container finished" podID="74d70e30-11e9-4557-bd4c-3639904729df" containerID="9ff4c7f53aada399649ecd724e4c7130cf472841519aacdb0c9a1cb40303f27e" exitCode=0 Feb 26 09:45:05 crc kubenswrapper[4741]: I0226 09:45:05.255886 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hlht" event={"ID":"74d70e30-11e9-4557-bd4c-3639904729df","Type":"ContainerDied","Data":"9ff4c7f53aada399649ecd724e4c7130cf472841519aacdb0c9a1cb40303f27e"} Feb 26 09:45:05 crc kubenswrapper[4741]: I0226 09:45:05.341842 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94"] Feb 26 09:45:05 crc kubenswrapper[4741]: I0226 09:45:05.354558 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534940-5jt94"] Feb 26 09:45:05 crc kubenswrapper[4741]: I0226 09:45:05.840344 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f162d4a6-995e-4b7a-b735-bab007914a24" path="/var/lib/kubelet/pods/f162d4a6-995e-4b7a-b735-bab007914a24/volumes" Feb 26 09:45:06 crc kubenswrapper[4741]: I0226 09:45:06.278467 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-9hlht" event={"ID":"74d70e30-11e9-4557-bd4c-3639904729df","Type":"ContainerStarted","Data":"f592b4f344c4a93cd6d7c88bb48a5f4222e810eccc1348ca49376258829f845c"} Feb 26 09:45:06 crc kubenswrapper[4741]: I0226 09:45:06.311877 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9hlht" podStartSLOduration=2.722293294 podStartE2EDuration="7.311849638s" podCreationTimestamp="2026-02-26 09:44:59 +0000 UTC" firstStartedPulling="2026-02-26 09:45:01.195716892 +0000 UTC m=+5536.191654279" lastFinishedPulling="2026-02-26 09:45:05.785273226 +0000 UTC m=+5540.781210623" observedRunningTime="2026-02-26 09:45:06.304003785 +0000 UTC m=+5541.299941172" watchObservedRunningTime="2026-02-26 09:45:06.311849638 +0000 UTC m=+5541.307787025" Feb 26 09:45:07 crc kubenswrapper[4741]: I0226 09:45:07.435045 4741 scope.go:117] "RemoveContainer" containerID="eebec0d89c7ef6f95dfac9110ed52b73e24fb6190eb59cc4fd7f080523bef4e8" Feb 26 09:45:07 crc kubenswrapper[4741]: I0226 09:45:07.525619 4741 scope.go:117] "RemoveContainer" containerID="0366502ebcc456aae1376be25fd9f0433d30f7d434ea9be58fbd91e485ba8922" Feb 26 09:45:09 crc kubenswrapper[4741]: I0226 09:45:09.752401 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9hlht" Feb 26 09:45:09 crc kubenswrapper[4741]: I0226 09:45:09.752776 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9hlht" Feb 26 09:45:10 crc kubenswrapper[4741]: I0226 09:45:10.809904 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9hlht" podUID="74d70e30-11e9-4557-bd4c-3639904729df" containerName="registry-server" probeResult="failure" output=< Feb 26 09:45:10 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:45:10 crc kubenswrapper[4741]: > 
Feb 26 09:45:19 crc kubenswrapper[4741]: I0226 09:45:19.815579 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9hlht" Feb 26 09:45:19 crc kubenswrapper[4741]: I0226 09:45:19.869642 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9hlht" Feb 26 09:45:20 crc kubenswrapper[4741]: I0226 09:45:20.085720 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9hlht"] Feb 26 09:45:21 crc kubenswrapper[4741]: I0226 09:45:21.492524 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9hlht" podUID="74d70e30-11e9-4557-bd4c-3639904729df" containerName="registry-server" containerID="cri-o://f592b4f344c4a93cd6d7c88bb48a5f4222e810eccc1348ca49376258829f845c" gracePeriod=2 Feb 26 09:45:22 crc kubenswrapper[4741]: I0226 09:45:22.519661 4741 generic.go:334] "Generic (PLEG): container finished" podID="74d70e30-11e9-4557-bd4c-3639904729df" containerID="f592b4f344c4a93cd6d7c88bb48a5f4222e810eccc1348ca49376258829f845c" exitCode=0 Feb 26 09:45:22 crc kubenswrapper[4741]: I0226 09:45:22.520904 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hlht" event={"ID":"74d70e30-11e9-4557-bd4c-3639904729df","Type":"ContainerDied","Data":"f592b4f344c4a93cd6d7c88bb48a5f4222e810eccc1348ca49376258829f845c"} Feb 26 09:45:22 crc kubenswrapper[4741]: I0226 09:45:22.951480 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9hlht" Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.017829 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74d70e30-11e9-4557-bd4c-3639904729df-utilities\") pod \"74d70e30-11e9-4557-bd4c-3639904729df\" (UID: \"74d70e30-11e9-4557-bd4c-3639904729df\") " Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.018750 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdmhm\" (UniqueName: \"kubernetes.io/projected/74d70e30-11e9-4557-bd4c-3639904729df-kube-api-access-hdmhm\") pod \"74d70e30-11e9-4557-bd4c-3639904729df\" (UID: \"74d70e30-11e9-4557-bd4c-3639904729df\") " Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.018844 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74d70e30-11e9-4557-bd4c-3639904729df-catalog-content\") pod \"74d70e30-11e9-4557-bd4c-3639904729df\" (UID: \"74d70e30-11e9-4557-bd4c-3639904729df\") " Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.019942 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74d70e30-11e9-4557-bd4c-3639904729df-utilities" (OuterVolumeSpecName: "utilities") pod "74d70e30-11e9-4557-bd4c-3639904729df" (UID: "74d70e30-11e9-4557-bd4c-3639904729df"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.020789 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74d70e30-11e9-4557-bd4c-3639904729df-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.029962 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d70e30-11e9-4557-bd4c-3639904729df-kube-api-access-hdmhm" (OuterVolumeSpecName: "kube-api-access-hdmhm") pod "74d70e30-11e9-4557-bd4c-3639904729df" (UID: "74d70e30-11e9-4557-bd4c-3639904729df"). InnerVolumeSpecName "kube-api-access-hdmhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.123588 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdmhm\" (UniqueName: \"kubernetes.io/projected/74d70e30-11e9-4557-bd4c-3639904729df-kube-api-access-hdmhm\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.141534 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74d70e30-11e9-4557-bd4c-3639904729df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74d70e30-11e9-4557-bd4c-3639904729df" (UID: "74d70e30-11e9-4557-bd4c-3639904729df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.227279 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74d70e30-11e9-4557-bd4c-3639904729df-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.542701 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hlht" event={"ID":"74d70e30-11e9-4557-bd4c-3639904729df","Type":"ContainerDied","Data":"5734322cc3c218788b584380ccf06570128862e40aaeddb7905e7bc72f34b4d2"} Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.542798 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9hlht" Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.542813 4741 scope.go:117] "RemoveContainer" containerID="f592b4f344c4a93cd6d7c88bb48a5f4222e810eccc1348ca49376258829f845c" Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.608136 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9hlht"] Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.612411 4741 scope.go:117] "RemoveContainer" containerID="9ff4c7f53aada399649ecd724e4c7130cf472841519aacdb0c9a1cb40303f27e" Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.631684 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9hlht"] Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.644640 4741 scope.go:117] "RemoveContainer" containerID="d8667528c46b78409a3da226c398149bc8788b4fdb61c4a35f5395eec2e115f6" Feb 26 09:45:23 crc kubenswrapper[4741]: I0226 09:45:23.809053 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d70e30-11e9-4557-bd4c-3639904729df" path="/var/lib/kubelet/pods/74d70e30-11e9-4557-bd4c-3639904729df/volumes" Feb 26 09:45:55 crc 
kubenswrapper[4741]: I0226 09:45:55.149192 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:45:55 crc kubenswrapper[4741]: I0226 09:45:55.149706 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.164297 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534986-585jh"] Feb 26 09:46:00 crc kubenswrapper[4741]: E0226 09:46:00.165604 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d70e30-11e9-4557-bd4c-3639904729df" containerName="extract-utilities" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.165626 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d70e30-11e9-4557-bd4c-3639904729df" containerName="extract-utilities" Feb 26 09:46:00 crc kubenswrapper[4741]: E0226 09:46:00.165686 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d70e30-11e9-4557-bd4c-3639904729df" containerName="extract-content" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.165695 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d70e30-11e9-4557-bd4c-3639904729df" containerName="extract-content" Feb 26 09:46:00 crc kubenswrapper[4741]: E0226 09:46:00.165708 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48702dac-6b38-4b4f-b001-a1df50fd0883" containerName="collect-profiles" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.165716 4741 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="48702dac-6b38-4b4f-b001-a1df50fd0883" containerName="collect-profiles" Feb 26 09:46:00 crc kubenswrapper[4741]: E0226 09:46:00.165731 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d70e30-11e9-4557-bd4c-3639904729df" containerName="registry-server" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.165739 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d70e30-11e9-4557-bd4c-3639904729df" containerName="registry-server" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.166058 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="48702dac-6b38-4b4f-b001-a1df50fd0883" containerName="collect-profiles" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.166083 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d70e30-11e9-4557-bd4c-3639904729df" containerName="registry-server" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.167954 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534986-585jh" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.172676 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.172768 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.172677 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.179218 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534986-585jh"] Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.296575 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rtdz\" (UniqueName: \"kubernetes.io/projected/757dd373-dea8-4ab0-9d57-2ecabbe00046-kube-api-access-4rtdz\") pod \"auto-csr-approver-29534986-585jh\" (UID: \"757dd373-dea8-4ab0-9d57-2ecabbe00046\") " pod="openshift-infra/auto-csr-approver-29534986-585jh" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.400151 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rtdz\" (UniqueName: \"kubernetes.io/projected/757dd373-dea8-4ab0-9d57-2ecabbe00046-kube-api-access-4rtdz\") pod \"auto-csr-approver-29534986-585jh\" (UID: \"757dd373-dea8-4ab0-9d57-2ecabbe00046\") " pod="openshift-infra/auto-csr-approver-29534986-585jh" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.422857 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rtdz\" (UniqueName: \"kubernetes.io/projected/757dd373-dea8-4ab0-9d57-2ecabbe00046-kube-api-access-4rtdz\") pod \"auto-csr-approver-29534986-585jh\" (UID: \"757dd373-dea8-4ab0-9d57-2ecabbe00046\") " 
pod="openshift-infra/auto-csr-approver-29534986-585jh" Feb 26 09:46:00 crc kubenswrapper[4741]: I0226 09:46:00.487790 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534986-585jh" Feb 26 09:46:01 crc kubenswrapper[4741]: I0226 09:46:01.044961 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534986-585jh"] Feb 26 09:46:02 crc kubenswrapper[4741]: I0226 09:46:02.015299 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534986-585jh" event={"ID":"757dd373-dea8-4ab0-9d57-2ecabbe00046","Type":"ContainerStarted","Data":"0d2250985f51a617279ad2d31480ea59bedc24d1a964cb800a3c8dbd48cbb26e"} Feb 26 09:46:03 crc kubenswrapper[4741]: I0226 09:46:03.030044 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534986-585jh" event={"ID":"757dd373-dea8-4ab0-9d57-2ecabbe00046","Type":"ContainerStarted","Data":"effcbbead1bbe1782c069b1a79d88b559ab1911736913e1a55f7f901fc3e4841"} Feb 26 09:46:03 crc kubenswrapper[4741]: I0226 09:46:03.056311 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534986-585jh" podStartSLOduration=2.028451959 podStartE2EDuration="3.056288463s" podCreationTimestamp="2026-02-26 09:46:00 +0000 UTC" firstStartedPulling="2026-02-26 09:46:01.051523471 +0000 UTC m=+5596.047460888" lastFinishedPulling="2026-02-26 09:46:02.079360005 +0000 UTC m=+5597.075297392" observedRunningTime="2026-02-26 09:46:03.052703171 +0000 UTC m=+5598.048640558" watchObservedRunningTime="2026-02-26 09:46:03.056288463 +0000 UTC m=+5598.052225850" Feb 26 09:46:05 crc kubenswrapper[4741]: I0226 09:46:05.053961 4741 generic.go:334] "Generic (PLEG): container finished" podID="757dd373-dea8-4ab0-9d57-2ecabbe00046" containerID="effcbbead1bbe1782c069b1a79d88b559ab1911736913e1a55f7f901fc3e4841" exitCode=0 Feb 26 09:46:05 crc 
kubenswrapper[4741]: I0226 09:46:05.054083 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534986-585jh" event={"ID":"757dd373-dea8-4ab0-9d57-2ecabbe00046","Type":"ContainerDied","Data":"effcbbead1bbe1782c069b1a79d88b559ab1911736913e1a55f7f901fc3e4841"} Feb 26 09:46:06 crc kubenswrapper[4741]: I0226 09:46:06.481934 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534986-585jh" Feb 26 09:46:06 crc kubenswrapper[4741]: I0226 09:46:06.584175 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rtdz\" (UniqueName: \"kubernetes.io/projected/757dd373-dea8-4ab0-9d57-2ecabbe00046-kube-api-access-4rtdz\") pod \"757dd373-dea8-4ab0-9d57-2ecabbe00046\" (UID: \"757dd373-dea8-4ab0-9d57-2ecabbe00046\") " Feb 26 09:46:06 crc kubenswrapper[4741]: I0226 09:46:06.591674 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/757dd373-dea8-4ab0-9d57-2ecabbe00046-kube-api-access-4rtdz" (OuterVolumeSpecName: "kube-api-access-4rtdz") pod "757dd373-dea8-4ab0-9d57-2ecabbe00046" (UID: "757dd373-dea8-4ab0-9d57-2ecabbe00046"). InnerVolumeSpecName "kube-api-access-4rtdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:46:06 crc kubenswrapper[4741]: I0226 09:46:06.687823 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rtdz\" (UniqueName: \"kubernetes.io/projected/757dd373-dea8-4ab0-9d57-2ecabbe00046-kube-api-access-4rtdz\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:07 crc kubenswrapper[4741]: I0226 09:46:07.079686 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534986-585jh" event={"ID":"757dd373-dea8-4ab0-9d57-2ecabbe00046","Type":"ContainerDied","Data":"0d2250985f51a617279ad2d31480ea59bedc24d1a964cb800a3c8dbd48cbb26e"} Feb 26 09:46:07 crc kubenswrapper[4741]: I0226 09:46:07.079935 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d2250985f51a617279ad2d31480ea59bedc24d1a964cb800a3c8dbd48cbb26e" Feb 26 09:46:07 crc kubenswrapper[4741]: I0226 09:46:07.079733 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534986-585jh" Feb 26 09:46:07 crc kubenswrapper[4741]: I0226 09:46:07.146187 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534980-tw7d7"] Feb 26 09:46:07 crc kubenswrapper[4741]: I0226 09:46:07.160522 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534980-tw7d7"] Feb 26 09:46:07 crc kubenswrapper[4741]: I0226 09:46:07.801332 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23965e77-0950-401f-9784-6a86250078e3" path="/var/lib/kubelet/pods/23965e77-0950-401f-9784-6a86250078e3/volumes" Feb 26 09:46:25 crc kubenswrapper[4741]: I0226 09:46:25.149211 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 09:46:25 crc kubenswrapper[4741]: I0226 09:46:25.150548 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:46:55 crc kubenswrapper[4741]: I0226 09:46:55.149638 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:46:55 crc kubenswrapper[4741]: I0226 09:46:55.151004 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:46:55 crc kubenswrapper[4741]: I0226 09:46:55.151147 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 09:46:55 crc kubenswrapper[4741]: I0226 09:46:55.153229 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"267ec551efc0b7a8c9baffc67b612f38cb576ba7cd8097d3c329332400b15e7b"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 09:46:55 crc kubenswrapper[4741]: I0226 09:46:55.153436 4741 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://267ec551efc0b7a8c9baffc67b612f38cb576ba7cd8097d3c329332400b15e7b" gracePeriod=600 Feb 26 09:46:55 crc kubenswrapper[4741]: I0226 09:46:55.783988 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="267ec551efc0b7a8c9baffc67b612f38cb576ba7cd8097d3c329332400b15e7b" exitCode=0 Feb 26 09:46:55 crc kubenswrapper[4741]: I0226 09:46:55.784156 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"267ec551efc0b7a8c9baffc67b612f38cb576ba7cd8097d3c329332400b15e7b"} Feb 26 09:46:55 crc kubenswrapper[4741]: I0226 09:46:55.784507 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e"} Feb 26 09:46:55 crc kubenswrapper[4741]: I0226 09:46:55.784547 4741 scope.go:117] "RemoveContainer" containerID="c9cc175a735fb987fbb2315d3f059dba76f425f8c41f456bac329a7c6dc673a4" Feb 26 09:47:07 crc kubenswrapper[4741]: I0226 09:47:07.763503 4741 scope.go:117] "RemoveContainer" containerID="782afc8309269b54f954c8d937a774031a1f5cb42071e7b9b5bb2a4abb8d6ecc" Feb 26 09:47:50 crc kubenswrapper[4741]: I0226 09:47:50.949953 4741 trace.go:236] Trace[398168116]: "Calculate volume metrics of wal for pod openshift-logging/logging-loki-ingester-0" (26-Feb-2026 09:47:49.696) (total time: 1238ms): Feb 26 09:47:50 crc kubenswrapper[4741]: Trace[398168116]: [1.238920912s] [1.238920912s] END Feb 26 09:48:00 crc kubenswrapper[4741]: I0226 09:48:00.178759 4741 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29534988-qtrm9"] Feb 26 09:48:00 crc kubenswrapper[4741]: E0226 09:48:00.182817 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="757dd373-dea8-4ab0-9d57-2ecabbe00046" containerName="oc" Feb 26 09:48:00 crc kubenswrapper[4741]: I0226 09:48:00.183047 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="757dd373-dea8-4ab0-9d57-2ecabbe00046" containerName="oc" Feb 26 09:48:00 crc kubenswrapper[4741]: I0226 09:48:00.183808 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="757dd373-dea8-4ab0-9d57-2ecabbe00046" containerName="oc" Feb 26 09:48:00 crc kubenswrapper[4741]: I0226 09:48:00.185661 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534988-qtrm9" Feb 26 09:48:00 crc kubenswrapper[4741]: I0226 09:48:00.189763 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:48:00 crc kubenswrapper[4741]: I0226 09:48:00.189763 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:48:00 crc kubenswrapper[4741]: I0226 09:48:00.190341 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:48:00 crc kubenswrapper[4741]: I0226 09:48:00.193821 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534988-qtrm9"] Feb 26 09:48:00 crc kubenswrapper[4741]: I0226 09:48:00.339065 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdmc9\" (UniqueName: \"kubernetes.io/projected/c850f612-7dc3-4114-b40c-81cbf0926ffc-kube-api-access-kdmc9\") pod \"auto-csr-approver-29534988-qtrm9\" (UID: \"c850f612-7dc3-4114-b40c-81cbf0926ffc\") " pod="openshift-infra/auto-csr-approver-29534988-qtrm9" Feb 26 09:48:00 crc kubenswrapper[4741]: I0226 
09:48:00.442217 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdmc9\" (UniqueName: \"kubernetes.io/projected/c850f612-7dc3-4114-b40c-81cbf0926ffc-kube-api-access-kdmc9\") pod \"auto-csr-approver-29534988-qtrm9\" (UID: \"c850f612-7dc3-4114-b40c-81cbf0926ffc\") " pod="openshift-infra/auto-csr-approver-29534988-qtrm9" Feb 26 09:48:00 crc kubenswrapper[4741]: I0226 09:48:00.475815 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdmc9\" (UniqueName: \"kubernetes.io/projected/c850f612-7dc3-4114-b40c-81cbf0926ffc-kube-api-access-kdmc9\") pod \"auto-csr-approver-29534988-qtrm9\" (UID: \"c850f612-7dc3-4114-b40c-81cbf0926ffc\") " pod="openshift-infra/auto-csr-approver-29534988-qtrm9" Feb 26 09:48:00 crc kubenswrapper[4741]: I0226 09:48:00.534251 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534988-qtrm9" Feb 26 09:48:01 crc kubenswrapper[4741]: I0226 09:48:01.088803 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534988-qtrm9"] Feb 26 09:48:01 crc kubenswrapper[4741]: I0226 09:48:01.706430 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534988-qtrm9" event={"ID":"c850f612-7dc3-4114-b40c-81cbf0926ffc","Type":"ContainerStarted","Data":"e59add85bbb5a57449dcaa63e22b8355a51722969a429d8b55bdf4cb5e1b8478"} Feb 26 09:48:02 crc kubenswrapper[4741]: I0226 09:48:02.719494 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534988-qtrm9" event={"ID":"c850f612-7dc3-4114-b40c-81cbf0926ffc","Type":"ContainerStarted","Data":"7fbed949edf04d32268cfa319499ea46ae904d34d199812a0fdac1365f9cba9e"} Feb 26 09:48:02 crc kubenswrapper[4741]: I0226 09:48:02.742991 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534988-qtrm9" 
podStartSLOduration=1.750672743 podStartE2EDuration="2.742968197s" podCreationTimestamp="2026-02-26 09:48:00 +0000 UTC" firstStartedPulling="2026-02-26 09:48:01.103643979 +0000 UTC m=+5716.099581366" lastFinishedPulling="2026-02-26 09:48:02.095939393 +0000 UTC m=+5717.091876820" observedRunningTime="2026-02-26 09:48:02.734035633 +0000 UTC m=+5717.729973020" watchObservedRunningTime="2026-02-26 09:48:02.742968197 +0000 UTC m=+5717.738905584" Feb 26 09:48:03 crc kubenswrapper[4741]: I0226 09:48:03.733198 4741 generic.go:334] "Generic (PLEG): container finished" podID="c850f612-7dc3-4114-b40c-81cbf0926ffc" containerID="7fbed949edf04d32268cfa319499ea46ae904d34d199812a0fdac1365f9cba9e" exitCode=0 Feb 26 09:48:03 crc kubenswrapper[4741]: I0226 09:48:03.733321 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534988-qtrm9" event={"ID":"c850f612-7dc3-4114-b40c-81cbf0926ffc","Type":"ContainerDied","Data":"7fbed949edf04d32268cfa319499ea46ae904d34d199812a0fdac1365f9cba9e"} Feb 26 09:48:06 crc kubenswrapper[4741]: I0226 09:48:06.017325 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534988-qtrm9" Feb 26 09:48:06 crc kubenswrapper[4741]: I0226 09:48:06.128732 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdmc9\" (UniqueName: \"kubernetes.io/projected/c850f612-7dc3-4114-b40c-81cbf0926ffc-kube-api-access-kdmc9\") pod \"c850f612-7dc3-4114-b40c-81cbf0926ffc\" (UID: \"c850f612-7dc3-4114-b40c-81cbf0926ffc\") " Feb 26 09:48:06 crc kubenswrapper[4741]: I0226 09:48:06.136738 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c850f612-7dc3-4114-b40c-81cbf0926ffc-kube-api-access-kdmc9" (OuterVolumeSpecName: "kube-api-access-kdmc9") pod "c850f612-7dc3-4114-b40c-81cbf0926ffc" (UID: "c850f612-7dc3-4114-b40c-81cbf0926ffc"). InnerVolumeSpecName "kube-api-access-kdmc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:48:06 crc kubenswrapper[4741]: I0226 09:48:06.234518 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdmc9\" (UniqueName: \"kubernetes.io/projected/c850f612-7dc3-4114-b40c-81cbf0926ffc-kube-api-access-kdmc9\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:06 crc kubenswrapper[4741]: I0226 09:48:06.788536 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534988-qtrm9" event={"ID":"c850f612-7dc3-4114-b40c-81cbf0926ffc","Type":"ContainerDied","Data":"e59add85bbb5a57449dcaa63e22b8355a51722969a429d8b55bdf4cb5e1b8478"} Feb 26 09:48:06 crc kubenswrapper[4741]: I0226 09:48:06.788778 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e59add85bbb5a57449dcaa63e22b8355a51722969a429d8b55bdf4cb5e1b8478" Feb 26 09:48:06 crc kubenswrapper[4741]: I0226 09:48:06.788556 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534988-qtrm9" Feb 26 09:48:07 crc kubenswrapper[4741]: I0226 09:48:07.110296 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534982-zxrwr"] Feb 26 09:48:07 crc kubenswrapper[4741]: I0226 09:48:07.140405 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534982-zxrwr"] Feb 26 09:48:07 crc kubenswrapper[4741]: I0226 09:48:07.810885 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12005334-faff-4e67-b8ad-b2ae50d7e79f" path="/var/lib/kubelet/pods/12005334-faff-4e67-b8ad-b2ae50d7e79f/volumes" Feb 26 09:48:07 crc kubenswrapper[4741]: I0226 09:48:07.862526 4741 scope.go:117] "RemoveContainer" containerID="dda150a0cbe7ad63ca50eee6bfc2ac737ad8078aeeb820c2c96d65433e410d15" Feb 26 09:48:55 crc kubenswrapper[4741]: I0226 09:48:55.149502 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:48:55 crc kubenswrapper[4741]: I0226 09:48:55.150218 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:49:07 crc kubenswrapper[4741]: I0226 09:49:07.965151 4741 scope.go:117] "RemoveContainer" containerID="dd2c5f54191d2719803d50f0a7d6bdc55dd9e248bcf5e1907ae29a6859e68e90" Feb 26 09:49:08 crc kubenswrapper[4741]: I0226 09:49:08.318143 4741 scope.go:117] "RemoveContainer" containerID="4a930be1a6dd585b45baf3f4a37eff86faa6222d7fee2dd47506fa876d8c9799" Feb 26 09:49:08 crc kubenswrapper[4741]: I0226 09:49:08.345473 4741 scope.go:117] "RemoveContainer" containerID="f334c311ef65c8b681412f16585dd424b88e1bafc5f256b3c4790b937bb8bff2" Feb 26 09:49:25 crc kubenswrapper[4741]: I0226 09:49:25.149641 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:49:25 crc kubenswrapper[4741]: I0226 09:49:25.151385 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.318347 4741 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-lkx4h"] Feb 26 09:49:38 crc kubenswrapper[4741]: E0226 09:49:38.319818 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c850f612-7dc3-4114-b40c-81cbf0926ffc" containerName="oc" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.319834 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="c850f612-7dc3-4114-b40c-81cbf0926ffc" containerName="oc" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.320194 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="c850f612-7dc3-4114-b40c-81cbf0926ffc" containerName="oc" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.322811 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.339562 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lkx4h"] Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.470076 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6397af62-1b92-42c0-b9b8-911599c8c22c-utilities\") pod \"redhat-marketplace-lkx4h\" (UID: \"6397af62-1b92-42c0-b9b8-911599c8c22c\") " pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.470174 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6397af62-1b92-42c0-b9b8-911599c8c22c-catalog-content\") pod \"redhat-marketplace-lkx4h\" (UID: \"6397af62-1b92-42c0-b9b8-911599c8c22c\") " pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.470390 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv5bz\" 
(UniqueName: \"kubernetes.io/projected/6397af62-1b92-42c0-b9b8-911599c8c22c-kube-api-access-sv5bz\") pod \"redhat-marketplace-lkx4h\" (UID: \"6397af62-1b92-42c0-b9b8-911599c8c22c\") " pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.573894 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv5bz\" (UniqueName: \"kubernetes.io/projected/6397af62-1b92-42c0-b9b8-911599c8c22c-kube-api-access-sv5bz\") pod \"redhat-marketplace-lkx4h\" (UID: \"6397af62-1b92-42c0-b9b8-911599c8c22c\") " pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.574045 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6397af62-1b92-42c0-b9b8-911599c8c22c-utilities\") pod \"redhat-marketplace-lkx4h\" (UID: \"6397af62-1b92-42c0-b9b8-911599c8c22c\") " pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.574100 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6397af62-1b92-42c0-b9b8-911599c8c22c-catalog-content\") pod \"redhat-marketplace-lkx4h\" (UID: \"6397af62-1b92-42c0-b9b8-911599c8c22c\") " pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.575084 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6397af62-1b92-42c0-b9b8-911599c8c22c-catalog-content\") pod \"redhat-marketplace-lkx4h\" (UID: \"6397af62-1b92-42c0-b9b8-911599c8c22c\") " pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.575852 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6397af62-1b92-42c0-b9b8-911599c8c22c-utilities\") pod \"redhat-marketplace-lkx4h\" (UID: \"6397af62-1b92-42c0-b9b8-911599c8c22c\") " pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.615638 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv5bz\" (UniqueName: \"kubernetes.io/projected/6397af62-1b92-42c0-b9b8-911599c8c22c-kube-api-access-sv5bz\") pod \"redhat-marketplace-lkx4h\" (UID: \"6397af62-1b92-42c0-b9b8-911599c8c22c\") " pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:38 crc kubenswrapper[4741]: I0226 09:49:38.649502 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:39 crc kubenswrapper[4741]: I0226 09:49:39.629935 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lkx4h"] Feb 26 09:49:40 crc kubenswrapper[4741]: I0226 09:49:40.105828 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkx4h" event={"ID":"6397af62-1b92-42c0-b9b8-911599c8c22c","Type":"ContainerStarted","Data":"2e31e15644f5d8817470a0f63849a3a669139a3f7b377f5afae77a0d51480595"} Feb 26 09:49:41 crc kubenswrapper[4741]: I0226 09:49:41.121171 4741 generic.go:334] "Generic (PLEG): container finished" podID="6397af62-1b92-42c0-b9b8-911599c8c22c" containerID="26f08606f924ff23d11651ab9e9b3acc2a8f46409ecd11b3b2773f88b5bd4b10" exitCode=0 Feb 26 09:49:41 crc kubenswrapper[4741]: I0226 09:49:41.121324 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkx4h" event={"ID":"6397af62-1b92-42c0-b9b8-911599c8c22c","Type":"ContainerDied","Data":"26f08606f924ff23d11651ab9e9b3acc2a8f46409ecd11b3b2773f88b5bd4b10"} Feb 26 09:49:41 crc kubenswrapper[4741]: I0226 09:49:41.126845 4741 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 26 09:49:43 crc kubenswrapper[4741]: I0226 09:49:43.150193 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkx4h" event={"ID":"6397af62-1b92-42c0-b9b8-911599c8c22c","Type":"ContainerStarted","Data":"4c6151d031940bab91e575339f55973c56bc244e84530ffdb533d80c2e0fff3f"} Feb 26 09:49:44 crc kubenswrapper[4741]: I0226 09:49:44.165727 4741 generic.go:334] "Generic (PLEG): container finished" podID="6397af62-1b92-42c0-b9b8-911599c8c22c" containerID="4c6151d031940bab91e575339f55973c56bc244e84530ffdb533d80c2e0fff3f" exitCode=0 Feb 26 09:49:44 crc kubenswrapper[4741]: I0226 09:49:44.165835 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkx4h" event={"ID":"6397af62-1b92-42c0-b9b8-911599c8c22c","Type":"ContainerDied","Data":"4c6151d031940bab91e575339f55973c56bc244e84530ffdb533d80c2e0fff3f"} Feb 26 09:49:46 crc kubenswrapper[4741]: I0226 09:49:46.192262 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkx4h" event={"ID":"6397af62-1b92-42c0-b9b8-911599c8c22c","Type":"ContainerStarted","Data":"85c160f91e49a1c208c32682adc15600be0aa2aa33471de03c91c87929455061"} Feb 26 09:49:46 crc kubenswrapper[4741]: I0226 09:49:46.231913 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lkx4h" podStartSLOduration=4.453200209 podStartE2EDuration="8.231884994s" podCreationTimestamp="2026-02-26 09:49:38 +0000 UTC" firstStartedPulling="2026-02-26 09:49:41.12424736 +0000 UTC m=+5816.120184757" lastFinishedPulling="2026-02-26 09:49:44.902932145 +0000 UTC m=+5819.898869542" observedRunningTime="2026-02-26 09:49:46.225542924 +0000 UTC m=+5821.221480321" watchObservedRunningTime="2026-02-26 09:49:46.231884994 +0000 UTC m=+5821.227822381" Feb 26 09:49:48 crc kubenswrapper[4741]: I0226 09:49:48.654091 4741 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:48 crc kubenswrapper[4741]: I0226 09:49:48.654752 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:48 crc kubenswrapper[4741]: I0226 09:49:48.815975 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:55 crc kubenswrapper[4741]: I0226 09:49:55.149670 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:49:55 crc kubenswrapper[4741]: I0226 09:49:55.150338 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:49:55 crc kubenswrapper[4741]: I0226 09:49:55.150403 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 09:49:55 crc kubenswrapper[4741]: I0226 09:49:55.151916 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 09:49:55 crc kubenswrapper[4741]: I0226 09:49:55.152008 4741 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" gracePeriod=600 Feb 26 09:49:55 crc kubenswrapper[4741]: E0226 09:49:55.309592 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:49:55 crc kubenswrapper[4741]: I0226 09:49:55.314600 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" exitCode=0 Feb 26 09:49:55 crc kubenswrapper[4741]: I0226 09:49:55.314654 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e"} Feb 26 09:49:55 crc kubenswrapper[4741]: I0226 09:49:55.314705 4741 scope.go:117] "RemoveContainer" containerID="267ec551efc0b7a8c9baffc67b612f38cb576ba7cd8097d3c329332400b15e7b" Feb 26 09:49:56 crc kubenswrapper[4741]: I0226 09:49:56.328584 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:49:56 crc kubenswrapper[4741]: E0226 09:49:56.329295 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:49:59 crc kubenswrapper[4741]: I0226 09:49:59.462019 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:49:59 crc kubenswrapper[4741]: I0226 09:49:59.530271 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lkx4h"] Feb 26 09:50:00 crc kubenswrapper[4741]: I0226 09:50:00.151038 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534990-6x9qf"] Feb 26 09:50:00 crc kubenswrapper[4741]: I0226 09:50:00.154535 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534990-6x9qf" Feb 26 09:50:00 crc kubenswrapper[4741]: I0226 09:50:00.157007 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:50:00 crc kubenswrapper[4741]: I0226 09:50:00.158346 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:50:00 crc kubenswrapper[4741]: I0226 09:50:00.158352 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:50:00 crc kubenswrapper[4741]: I0226 09:50:00.161711 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534990-6x9qf"] Feb 26 09:50:00 crc kubenswrapper[4741]: I0226 09:50:00.264003 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mszdw\" (UniqueName: \"kubernetes.io/projected/9204c179-3ec8-4aba-87a2-2b95ab4d6552-kube-api-access-mszdw\") pod \"auto-csr-approver-29534990-6x9qf\" (UID: 
\"9204c179-3ec8-4aba-87a2-2b95ab4d6552\") " pod="openshift-infra/auto-csr-approver-29534990-6x9qf" Feb 26 09:50:00 crc kubenswrapper[4741]: I0226 09:50:00.366998 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mszdw\" (UniqueName: \"kubernetes.io/projected/9204c179-3ec8-4aba-87a2-2b95ab4d6552-kube-api-access-mszdw\") pod \"auto-csr-approver-29534990-6x9qf\" (UID: \"9204c179-3ec8-4aba-87a2-2b95ab4d6552\") " pod="openshift-infra/auto-csr-approver-29534990-6x9qf" Feb 26 09:50:00 crc kubenswrapper[4741]: I0226 09:50:00.376740 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lkx4h" podUID="6397af62-1b92-42c0-b9b8-911599c8c22c" containerName="registry-server" containerID="cri-o://85c160f91e49a1c208c32682adc15600be0aa2aa33471de03c91c87929455061" gracePeriod=2 Feb 26 09:50:00 crc kubenswrapper[4741]: I0226 09:50:00.408077 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mszdw\" (UniqueName: \"kubernetes.io/projected/9204c179-3ec8-4aba-87a2-2b95ab4d6552-kube-api-access-mszdw\") pod \"auto-csr-approver-29534990-6x9qf\" (UID: \"9204c179-3ec8-4aba-87a2-2b95ab4d6552\") " pod="openshift-infra/auto-csr-approver-29534990-6x9qf" Feb 26 09:50:00 crc kubenswrapper[4741]: I0226 09:50:00.488922 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534990-6x9qf" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.044300 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.128467 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534990-6x9qf"] Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.193957 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6397af62-1b92-42c0-b9b8-911599c8c22c-utilities\") pod \"6397af62-1b92-42c0-b9b8-911599c8c22c\" (UID: \"6397af62-1b92-42c0-b9b8-911599c8c22c\") " Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.194069 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv5bz\" (UniqueName: \"kubernetes.io/projected/6397af62-1b92-42c0-b9b8-911599c8c22c-kube-api-access-sv5bz\") pod \"6397af62-1b92-42c0-b9b8-911599c8c22c\" (UID: \"6397af62-1b92-42c0-b9b8-911599c8c22c\") " Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.194312 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6397af62-1b92-42c0-b9b8-911599c8c22c-catalog-content\") pod \"6397af62-1b92-42c0-b9b8-911599c8c22c\" (UID: \"6397af62-1b92-42c0-b9b8-911599c8c22c\") " Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.195379 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6397af62-1b92-42c0-b9b8-911599c8c22c-utilities" (OuterVolumeSpecName: "utilities") pod "6397af62-1b92-42c0-b9b8-911599c8c22c" (UID: "6397af62-1b92-42c0-b9b8-911599c8c22c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.196602 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6397af62-1b92-42c0-b9b8-911599c8c22c-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.206372 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6397af62-1b92-42c0-b9b8-911599c8c22c-kube-api-access-sv5bz" (OuterVolumeSpecName: "kube-api-access-sv5bz") pod "6397af62-1b92-42c0-b9b8-911599c8c22c" (UID: "6397af62-1b92-42c0-b9b8-911599c8c22c"). InnerVolumeSpecName "kube-api-access-sv5bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.223887 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6397af62-1b92-42c0-b9b8-911599c8c22c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6397af62-1b92-42c0-b9b8-911599c8c22c" (UID: "6397af62-1b92-42c0-b9b8-911599c8c22c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.299306 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6397af62-1b92-42c0-b9b8-911599c8c22c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.299388 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv5bz\" (UniqueName: \"kubernetes.io/projected/6397af62-1b92-42c0-b9b8-911599c8c22c-kube-api-access-sv5bz\") on node \"crc\" DevicePath \"\"" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.390137 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534990-6x9qf" event={"ID":"9204c179-3ec8-4aba-87a2-2b95ab4d6552","Type":"ContainerStarted","Data":"b8d85a0ab51f44d63cce65472a9914c578a0c03d65dff482272d7a37fced4e15"} Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.393827 4741 generic.go:334] "Generic (PLEG): container finished" podID="6397af62-1b92-42c0-b9b8-911599c8c22c" containerID="85c160f91e49a1c208c32682adc15600be0aa2aa33471de03c91c87929455061" exitCode=0 Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.393879 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkx4h" event={"ID":"6397af62-1b92-42c0-b9b8-911599c8c22c","Type":"ContainerDied","Data":"85c160f91e49a1c208c32682adc15600be0aa2aa33471de03c91c87929455061"} Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.393911 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkx4h" event={"ID":"6397af62-1b92-42c0-b9b8-911599c8c22c","Type":"ContainerDied","Data":"2e31e15644f5d8817470a0f63849a3a669139a3f7b377f5afae77a0d51480595"} Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.393929 4741 scope.go:117] "RemoveContainer" 
containerID="85c160f91e49a1c208c32682adc15600be0aa2aa33471de03c91c87929455061" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.394033 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lkx4h" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.434355 4741 scope.go:117] "RemoveContainer" containerID="4c6151d031940bab91e575339f55973c56bc244e84530ffdb533d80c2e0fff3f" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.514886 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lkx4h"] Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.515543 4741 scope.go:117] "RemoveContainer" containerID="26f08606f924ff23d11651ab9e9b3acc2a8f46409ecd11b3b2773f88b5bd4b10" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.529050 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lkx4h"] Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.564712 4741 scope.go:117] "RemoveContainer" containerID="85c160f91e49a1c208c32682adc15600be0aa2aa33471de03c91c87929455061" Feb 26 09:50:01 crc kubenswrapper[4741]: E0226 09:50:01.568008 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85c160f91e49a1c208c32682adc15600be0aa2aa33471de03c91c87929455061\": container with ID starting with 85c160f91e49a1c208c32682adc15600be0aa2aa33471de03c91c87929455061 not found: ID does not exist" containerID="85c160f91e49a1c208c32682adc15600be0aa2aa33471de03c91c87929455061" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.568063 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c160f91e49a1c208c32682adc15600be0aa2aa33471de03c91c87929455061"} err="failed to get container status \"85c160f91e49a1c208c32682adc15600be0aa2aa33471de03c91c87929455061\": rpc error: code = NotFound desc = could 
not find container \"85c160f91e49a1c208c32682adc15600be0aa2aa33471de03c91c87929455061\": container with ID starting with 85c160f91e49a1c208c32682adc15600be0aa2aa33471de03c91c87929455061 not found: ID does not exist" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.568096 4741 scope.go:117] "RemoveContainer" containerID="4c6151d031940bab91e575339f55973c56bc244e84530ffdb533d80c2e0fff3f" Feb 26 09:50:01 crc kubenswrapper[4741]: E0226 09:50:01.568797 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c6151d031940bab91e575339f55973c56bc244e84530ffdb533d80c2e0fff3f\": container with ID starting with 4c6151d031940bab91e575339f55973c56bc244e84530ffdb533d80c2e0fff3f not found: ID does not exist" containerID="4c6151d031940bab91e575339f55973c56bc244e84530ffdb533d80c2e0fff3f" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.568828 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c6151d031940bab91e575339f55973c56bc244e84530ffdb533d80c2e0fff3f"} err="failed to get container status \"4c6151d031940bab91e575339f55973c56bc244e84530ffdb533d80c2e0fff3f\": rpc error: code = NotFound desc = could not find container \"4c6151d031940bab91e575339f55973c56bc244e84530ffdb533d80c2e0fff3f\": container with ID starting with 4c6151d031940bab91e575339f55973c56bc244e84530ffdb533d80c2e0fff3f not found: ID does not exist" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.568846 4741 scope.go:117] "RemoveContainer" containerID="26f08606f924ff23d11651ab9e9b3acc2a8f46409ecd11b3b2773f88b5bd4b10" Feb 26 09:50:01 crc kubenswrapper[4741]: E0226 09:50:01.569551 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26f08606f924ff23d11651ab9e9b3acc2a8f46409ecd11b3b2773f88b5bd4b10\": container with ID starting with 26f08606f924ff23d11651ab9e9b3acc2a8f46409ecd11b3b2773f88b5bd4b10 not found: 
ID does not exist" containerID="26f08606f924ff23d11651ab9e9b3acc2a8f46409ecd11b3b2773f88b5bd4b10" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.569607 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26f08606f924ff23d11651ab9e9b3acc2a8f46409ecd11b3b2773f88b5bd4b10"} err="failed to get container status \"26f08606f924ff23d11651ab9e9b3acc2a8f46409ecd11b3b2773f88b5bd4b10\": rpc error: code = NotFound desc = could not find container \"26f08606f924ff23d11651ab9e9b3acc2a8f46409ecd11b3b2773f88b5bd4b10\": container with ID starting with 26f08606f924ff23d11651ab9e9b3acc2a8f46409ecd11b3b2773f88b5bd4b10 not found: ID does not exist" Feb 26 09:50:01 crc kubenswrapper[4741]: I0226 09:50:01.802627 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6397af62-1b92-42c0-b9b8-911599c8c22c" path="/var/lib/kubelet/pods/6397af62-1b92-42c0-b9b8-911599c8c22c/volumes" Feb 26 09:50:03 crc kubenswrapper[4741]: I0226 09:50:03.424464 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534990-6x9qf" event={"ID":"9204c179-3ec8-4aba-87a2-2b95ab4d6552","Type":"ContainerStarted","Data":"703a658f682910e12e647a8a2cbad81187870482f274d296f8b4187f68ecd0b8"} Feb 26 09:50:03 crc kubenswrapper[4741]: I0226 09:50:03.440219 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534990-6x9qf" podStartSLOduration=2.5521240670000003 podStartE2EDuration="3.44018774s" podCreationTimestamp="2026-02-26 09:50:00 +0000 UTC" firstStartedPulling="2026-02-26 09:50:01.133718665 +0000 UTC m=+5836.129656062" lastFinishedPulling="2026-02-26 09:50:02.021782248 +0000 UTC m=+5837.017719735" observedRunningTime="2026-02-26 09:50:03.439026517 +0000 UTC m=+5838.434963904" watchObservedRunningTime="2026-02-26 09:50:03.44018774 +0000 UTC m=+5838.436125127" Feb 26 09:50:04 crc kubenswrapper[4741]: I0226 09:50:04.442096 4741 generic.go:334] "Generic 
(PLEG): container finished" podID="9204c179-3ec8-4aba-87a2-2b95ab4d6552" containerID="703a658f682910e12e647a8a2cbad81187870482f274d296f8b4187f68ecd0b8" exitCode=0 Feb 26 09:50:04 crc kubenswrapper[4741]: I0226 09:50:04.442259 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534990-6x9qf" event={"ID":"9204c179-3ec8-4aba-87a2-2b95ab4d6552","Type":"ContainerDied","Data":"703a658f682910e12e647a8a2cbad81187870482f274d296f8b4187f68ecd0b8"} Feb 26 09:50:05 crc kubenswrapper[4741]: I0226 09:50:05.963058 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534990-6x9qf" Feb 26 09:50:06 crc kubenswrapper[4741]: I0226 09:50:06.043993 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mszdw\" (UniqueName: \"kubernetes.io/projected/9204c179-3ec8-4aba-87a2-2b95ab4d6552-kube-api-access-mszdw\") pod \"9204c179-3ec8-4aba-87a2-2b95ab4d6552\" (UID: \"9204c179-3ec8-4aba-87a2-2b95ab4d6552\") " Feb 26 09:50:06 crc kubenswrapper[4741]: I0226 09:50:06.051478 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9204c179-3ec8-4aba-87a2-2b95ab4d6552-kube-api-access-mszdw" (OuterVolumeSpecName: "kube-api-access-mszdw") pod "9204c179-3ec8-4aba-87a2-2b95ab4d6552" (UID: "9204c179-3ec8-4aba-87a2-2b95ab4d6552"). InnerVolumeSpecName "kube-api-access-mszdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:50:06 crc kubenswrapper[4741]: I0226 09:50:06.149750 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mszdw\" (UniqueName: \"kubernetes.io/projected/9204c179-3ec8-4aba-87a2-2b95ab4d6552-kube-api-access-mszdw\") on node \"crc\" DevicePath \"\"" Feb 26 09:50:06 crc kubenswrapper[4741]: I0226 09:50:06.479923 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534990-6x9qf" event={"ID":"9204c179-3ec8-4aba-87a2-2b95ab4d6552","Type":"ContainerDied","Data":"b8d85a0ab51f44d63cce65472a9914c578a0c03d65dff482272d7a37fced4e15"} Feb 26 09:50:06 crc kubenswrapper[4741]: I0226 09:50:06.480325 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8d85a0ab51f44d63cce65472a9914c578a0c03d65dff482272d7a37fced4e15" Feb 26 09:50:06 crc kubenswrapper[4741]: I0226 09:50:06.480199 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534990-6x9qf" Feb 26 09:50:06 crc kubenswrapper[4741]: I0226 09:50:06.598362 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534984-fcdl2"] Feb 26 09:50:06 crc kubenswrapper[4741]: I0226 09:50:06.612437 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534984-fcdl2"] Feb 26 09:50:07 crc kubenswrapper[4741]: I0226 09:50:07.801168 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668366fe-f55b-4fda-8fa2-98fc13dcfa38" path="/var/lib/kubelet/pods/668366fe-f55b-4fda-8fa2-98fc13dcfa38/volumes" Feb 26 09:50:08 crc kubenswrapper[4741]: I0226 09:50:08.447370 4741 scope.go:117] "RemoveContainer" containerID="b8ec27eb60abb2802666b602f9db36886dae7fcedfb62bf93f279cf29bc152d3" Feb 26 09:50:09 crc kubenswrapper[4741]: I0226 09:50:09.787510 4741 scope.go:117] "RemoveContainer" 
containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:50:09 crc kubenswrapper[4741]: E0226 09:50:09.789022 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:50:24 crc kubenswrapper[4741]: I0226 09:50:24.787678 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:50:24 crc kubenswrapper[4741]: E0226 09:50:24.788601 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:50:35 crc kubenswrapper[4741]: I0226 09:50:35.796724 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:50:35 crc kubenswrapper[4741]: E0226 09:50:35.797672 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:50:49 crc kubenswrapper[4741]: I0226 09:50:49.786726 4741 scope.go:117] 
"RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:50:49 crc kubenswrapper[4741]: E0226 09:50:49.787721 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:51:01 crc kubenswrapper[4741]: I0226 09:51:01.787791 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:51:01 crc kubenswrapper[4741]: E0226 09:51:01.788755 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:51:15 crc kubenswrapper[4741]: I0226 09:51:15.798281 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:51:15 crc kubenswrapper[4741]: E0226 09:51:15.799112 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:51:28 crc kubenswrapper[4741]: I0226 09:51:28.787547 
4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:51:28 crc kubenswrapper[4741]: E0226 09:51:28.788720 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:51:39 crc kubenswrapper[4741]: I0226 09:51:39.787768 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:51:39 crc kubenswrapper[4741]: E0226 09:51:39.788688 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:51:50 crc kubenswrapper[4741]: I0226 09:51:50.790739 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:51:50 crc kubenswrapper[4741]: E0226 09:51:50.791592 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 
09:52:00.149141 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534992-mrfb7"] Feb 26 09:52:00 crc kubenswrapper[4741]: E0226 09:52:00.150166 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6397af62-1b92-42c0-b9b8-911599c8c22c" containerName="registry-server" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.150180 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="6397af62-1b92-42c0-b9b8-911599c8c22c" containerName="registry-server" Feb 26 09:52:00 crc kubenswrapper[4741]: E0226 09:52:00.150209 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6397af62-1b92-42c0-b9b8-911599c8c22c" containerName="extract-utilities" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.150216 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="6397af62-1b92-42c0-b9b8-911599c8c22c" containerName="extract-utilities" Feb 26 09:52:00 crc kubenswrapper[4741]: E0226 09:52:00.150225 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9204c179-3ec8-4aba-87a2-2b95ab4d6552" containerName="oc" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.150232 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="9204c179-3ec8-4aba-87a2-2b95ab4d6552" containerName="oc" Feb 26 09:52:00 crc kubenswrapper[4741]: E0226 09:52:00.150258 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6397af62-1b92-42c0-b9b8-911599c8c22c" containerName="extract-content" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.150264 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="6397af62-1b92-42c0-b9b8-911599c8c22c" containerName="extract-content" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.150505 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="9204c179-3ec8-4aba-87a2-2b95ab4d6552" containerName="oc" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.150516 4741 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="6397af62-1b92-42c0-b9b8-911599c8c22c" containerName="registry-server" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.151417 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534992-mrfb7" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.154449 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.154703 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.170093 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.171014 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534992-mrfb7"] Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.262497 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvvcc\" (UniqueName: \"kubernetes.io/projected/514e05a2-2165-4f5a-bdff-92c787343941-kube-api-access-kvvcc\") pod \"auto-csr-approver-29534992-mrfb7\" (UID: \"514e05a2-2165-4f5a-bdff-92c787343941\") " pod="openshift-infra/auto-csr-approver-29534992-mrfb7" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.364342 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvvcc\" (UniqueName: \"kubernetes.io/projected/514e05a2-2165-4f5a-bdff-92c787343941-kube-api-access-kvvcc\") pod \"auto-csr-approver-29534992-mrfb7\" (UID: \"514e05a2-2165-4f5a-bdff-92c787343941\") " pod="openshift-infra/auto-csr-approver-29534992-mrfb7" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.395794 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvvcc\" 
(UniqueName: \"kubernetes.io/projected/514e05a2-2165-4f5a-bdff-92c787343941-kube-api-access-kvvcc\") pod \"auto-csr-approver-29534992-mrfb7\" (UID: \"514e05a2-2165-4f5a-bdff-92c787343941\") " pod="openshift-infra/auto-csr-approver-29534992-mrfb7" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.495948 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534992-mrfb7" Feb 26 09:52:00 crc kubenswrapper[4741]: I0226 09:52:00.984588 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534992-mrfb7"] Feb 26 09:52:01 crc kubenswrapper[4741]: I0226 09:52:01.347080 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534992-mrfb7" event={"ID":"514e05a2-2165-4f5a-bdff-92c787343941","Type":"ContainerStarted","Data":"8701988d7652c6945f6dc57345046e70fc512e2373640e96608d2bf424c94ae5"} Feb 26 09:52:03 crc kubenswrapper[4741]: I0226 09:52:03.499314 4741 generic.go:334] "Generic (PLEG): container finished" podID="514e05a2-2165-4f5a-bdff-92c787343941" containerID="f487180867be0fff1dd6b7d8d1cf8fc9c29dbd1b500628835d9cb814f42d314a" exitCode=0 Feb 26 09:52:03 crc kubenswrapper[4741]: I0226 09:52:03.499461 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534992-mrfb7" event={"ID":"514e05a2-2165-4f5a-bdff-92c787343941","Type":"ContainerDied","Data":"f487180867be0fff1dd6b7d8d1cf8fc9c29dbd1b500628835d9cb814f42d314a"} Feb 26 09:52:03 crc kubenswrapper[4741]: I0226 09:52:03.787430 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:52:03 crc kubenswrapper[4741]: E0226 09:52:03.787869 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:52:04 crc kubenswrapper[4741]: I0226 09:52:04.981700 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534992-mrfb7" Feb 26 09:52:05 crc kubenswrapper[4741]: I0226 09:52:05.133226 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvvcc\" (UniqueName: \"kubernetes.io/projected/514e05a2-2165-4f5a-bdff-92c787343941-kube-api-access-kvvcc\") pod \"514e05a2-2165-4f5a-bdff-92c787343941\" (UID: \"514e05a2-2165-4f5a-bdff-92c787343941\") " Feb 26 09:52:05 crc kubenswrapper[4741]: I0226 09:52:05.142455 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514e05a2-2165-4f5a-bdff-92c787343941-kube-api-access-kvvcc" (OuterVolumeSpecName: "kube-api-access-kvvcc") pod "514e05a2-2165-4f5a-bdff-92c787343941" (UID: "514e05a2-2165-4f5a-bdff-92c787343941"). InnerVolumeSpecName "kube-api-access-kvvcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:52:05 crc kubenswrapper[4741]: I0226 09:52:05.236271 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvvcc\" (UniqueName: \"kubernetes.io/projected/514e05a2-2165-4f5a-bdff-92c787343941-kube-api-access-kvvcc\") on node \"crc\" DevicePath \"\"" Feb 26 09:52:05 crc kubenswrapper[4741]: I0226 09:52:05.526156 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534992-mrfb7" event={"ID":"514e05a2-2165-4f5a-bdff-92c787343941","Type":"ContainerDied","Data":"8701988d7652c6945f6dc57345046e70fc512e2373640e96608d2bf424c94ae5"} Feb 26 09:52:05 crc kubenswrapper[4741]: I0226 09:52:05.526489 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8701988d7652c6945f6dc57345046e70fc512e2373640e96608d2bf424c94ae5" Feb 26 09:52:05 crc kubenswrapper[4741]: I0226 09:52:05.526230 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534992-mrfb7" Feb 26 09:52:06 crc kubenswrapper[4741]: I0226 09:52:06.068695 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534986-585jh"] Feb 26 09:52:06 crc kubenswrapper[4741]: I0226 09:52:06.080644 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534986-585jh"] Feb 26 09:52:07 crc kubenswrapper[4741]: I0226 09:52:07.806775 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="757dd373-dea8-4ab0-9d57-2ecabbe00046" path="/var/lib/kubelet/pods/757dd373-dea8-4ab0-9d57-2ecabbe00046/volumes" Feb 26 09:52:08 crc kubenswrapper[4741]: I0226 09:52:08.656800 4741 scope.go:117] "RemoveContainer" containerID="effcbbead1bbe1782c069b1a79d88b559ab1911736913e1a55f7f901fc3e4841" Feb 26 09:52:14 crc kubenswrapper[4741]: I0226 09:52:14.787567 4741 scope.go:117] "RemoveContainer" 
containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:52:14 crc kubenswrapper[4741]: E0226 09:52:14.788351 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:52:27 crc kubenswrapper[4741]: I0226 09:52:27.788947 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:52:27 crc kubenswrapper[4741]: E0226 09:52:27.790317 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:52:42 crc kubenswrapper[4741]: I0226 09:52:42.788611 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:52:42 crc kubenswrapper[4741]: E0226 09:52:42.789436 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:52:46 crc kubenswrapper[4741]: I0226 09:52:46.811810 4741 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-x77f8" podUID="c9c57ac4-4382-4a2a-b0c7-8985f71ea615" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:52:53 crc kubenswrapper[4741]: I0226 09:52:53.787986 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:52:53 crc kubenswrapper[4741]: E0226 09:52:53.788991 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:53:06 crc kubenswrapper[4741]: I0226 09:53:06.788643 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:53:06 crc kubenswrapper[4741]: E0226 09:53:06.789655 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:53:21 crc kubenswrapper[4741]: I0226 09:53:21.788261 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:53:21 crc kubenswrapper[4741]: E0226 09:53:21.789790 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:53:28 crc kubenswrapper[4741]: I0226 09:53:28.813066 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vd9hl"] Feb 26 09:53:28 crc kubenswrapper[4741]: E0226 09:53:28.816625 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514e05a2-2165-4f5a-bdff-92c787343941" containerName="oc" Feb 26 09:53:28 crc kubenswrapper[4741]: I0226 09:53:28.816664 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="514e05a2-2165-4f5a-bdff-92c787343941" containerName="oc" Feb 26 09:53:28 crc kubenswrapper[4741]: I0226 09:53:28.817196 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="514e05a2-2165-4f5a-bdff-92c787343941" containerName="oc" Feb 26 09:53:28 crc kubenswrapper[4741]: I0226 09:53:28.819713 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:28 crc kubenswrapper[4741]: I0226 09:53:28.846468 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vd9hl"] Feb 26 09:53:28 crc kubenswrapper[4741]: I0226 09:53:28.992590 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ac2623-60d9-4d59-a57a-7a467d975151-catalog-content\") pod \"community-operators-vd9hl\" (UID: \"04ac2623-60d9-4d59-a57a-7a467d975151\") " pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:28 crc kubenswrapper[4741]: I0226 09:53:28.992667 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ac2623-60d9-4d59-a57a-7a467d975151-utilities\") pod \"community-operators-vd9hl\" (UID: \"04ac2623-60d9-4d59-a57a-7a467d975151\") " pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:28 crc kubenswrapper[4741]: I0226 09:53:28.993133 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l85r5\" (UniqueName: \"kubernetes.io/projected/04ac2623-60d9-4d59-a57a-7a467d975151-kube-api-access-l85r5\") pod \"community-operators-vd9hl\" (UID: \"04ac2623-60d9-4d59-a57a-7a467d975151\") " pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:29 crc kubenswrapper[4741]: I0226 09:53:29.095133 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ac2623-60d9-4d59-a57a-7a467d975151-catalog-content\") pod \"community-operators-vd9hl\" (UID: \"04ac2623-60d9-4d59-a57a-7a467d975151\") " pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:29 crc kubenswrapper[4741]: I0226 09:53:29.095230 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ac2623-60d9-4d59-a57a-7a467d975151-utilities\") pod \"community-operators-vd9hl\" (UID: \"04ac2623-60d9-4d59-a57a-7a467d975151\") " pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:29 crc kubenswrapper[4741]: I0226 09:53:29.095356 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l85r5\" (UniqueName: \"kubernetes.io/projected/04ac2623-60d9-4d59-a57a-7a467d975151-kube-api-access-l85r5\") pod \"community-operators-vd9hl\" (UID: \"04ac2623-60d9-4d59-a57a-7a467d975151\") " pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:29 crc kubenswrapper[4741]: I0226 09:53:29.095602 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ac2623-60d9-4d59-a57a-7a467d975151-catalog-content\") pod \"community-operators-vd9hl\" (UID: \"04ac2623-60d9-4d59-a57a-7a467d975151\") " pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:29 crc kubenswrapper[4741]: I0226 09:53:29.095799 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ac2623-60d9-4d59-a57a-7a467d975151-utilities\") pod \"community-operators-vd9hl\" (UID: \"04ac2623-60d9-4d59-a57a-7a467d975151\") " pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:29 crc kubenswrapper[4741]: I0226 09:53:29.117444 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l85r5\" (UniqueName: \"kubernetes.io/projected/04ac2623-60d9-4d59-a57a-7a467d975151-kube-api-access-l85r5\") pod \"community-operators-vd9hl\" (UID: \"04ac2623-60d9-4d59-a57a-7a467d975151\") " pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:29 crc kubenswrapper[4741]: I0226 09:53:29.152983 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:29 crc kubenswrapper[4741]: I0226 09:53:29.713947 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vd9hl"] Feb 26 09:53:30 crc kubenswrapper[4741]: I0226 09:53:30.363745 4741 generic.go:334] "Generic (PLEG): container finished" podID="04ac2623-60d9-4d59-a57a-7a467d975151" containerID="afe5db11be0b3194ddff98d9e31069fc36a854316629c98759f972985d6d51fc" exitCode=0 Feb 26 09:53:30 crc kubenswrapper[4741]: I0226 09:53:30.364098 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd9hl" event={"ID":"04ac2623-60d9-4d59-a57a-7a467d975151","Type":"ContainerDied","Data":"afe5db11be0b3194ddff98d9e31069fc36a854316629c98759f972985d6d51fc"} Feb 26 09:53:30 crc kubenswrapper[4741]: I0226 09:53:30.364149 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd9hl" event={"ID":"04ac2623-60d9-4d59-a57a-7a467d975151","Type":"ContainerStarted","Data":"d3dbc85a39cccdbe73149e8d0a4aaab9941c055d7d5024d28164cf7d709b65f0"} Feb 26 09:53:32 crc kubenswrapper[4741]: I0226 09:53:32.393161 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd9hl" event={"ID":"04ac2623-60d9-4d59-a57a-7a467d975151","Type":"ContainerStarted","Data":"02e2350d71d98d97b33ad39b7d1bf42b16902038b0e35d7f96c60326aa66241c"} Feb 26 09:53:33 crc kubenswrapper[4741]: I0226 09:53:33.787403 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:53:33 crc kubenswrapper[4741]: E0226 09:53:33.788055 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:53:36 crc kubenswrapper[4741]: I0226 09:53:36.449531 4741 generic.go:334] "Generic (PLEG): container finished" podID="04ac2623-60d9-4d59-a57a-7a467d975151" containerID="02e2350d71d98d97b33ad39b7d1bf42b16902038b0e35d7f96c60326aa66241c" exitCode=0 Feb 26 09:53:36 crc kubenswrapper[4741]: I0226 09:53:36.449651 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd9hl" event={"ID":"04ac2623-60d9-4d59-a57a-7a467d975151","Type":"ContainerDied","Data":"02e2350d71d98d97b33ad39b7d1bf42b16902038b0e35d7f96c60326aa66241c"} Feb 26 09:53:38 crc kubenswrapper[4741]: I0226 09:53:38.474821 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd9hl" event={"ID":"04ac2623-60d9-4d59-a57a-7a467d975151","Type":"ContainerStarted","Data":"a5ec93e1c6b6b6a441031432d9497ec23d46866a5b432a5e96b75394bedf9ed8"} Feb 26 09:53:38 crc kubenswrapper[4741]: I0226 09:53:38.508933 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vd9hl" podStartSLOduration=3.618895928 podStartE2EDuration="10.508906936s" podCreationTimestamp="2026-02-26 09:53:28 +0000 UTC" firstStartedPulling="2026-02-26 09:53:30.366132004 +0000 UTC m=+6045.362069391" lastFinishedPulling="2026-02-26 09:53:37.256143002 +0000 UTC m=+6052.252080399" observedRunningTime="2026-02-26 09:53:38.496999008 +0000 UTC m=+6053.492936395" watchObservedRunningTime="2026-02-26 09:53:38.508906936 +0000 UTC m=+6053.504844323" Feb 26 09:53:39 crc kubenswrapper[4741]: I0226 09:53:39.153180 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:39 crc kubenswrapper[4741]: I0226 
09:53:39.153241 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:40 crc kubenswrapper[4741]: I0226 09:53:40.211591 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vd9hl" podUID="04ac2623-60d9-4d59-a57a-7a467d975151" containerName="registry-server" probeResult="failure" output=< Feb 26 09:53:40 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:53:40 crc kubenswrapper[4741]: > Feb 26 09:53:46 crc kubenswrapper[4741]: I0226 09:53:46.788238 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:53:46 crc kubenswrapper[4741]: E0226 09:53:46.789346 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:53:50 crc kubenswrapper[4741]: I0226 09:53:50.218386 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vd9hl" podUID="04ac2623-60d9-4d59-a57a-7a467d975151" containerName="registry-server" probeResult="failure" output=< Feb 26 09:53:50 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:53:50 crc kubenswrapper[4741]: > Feb 26 09:53:59 crc kubenswrapper[4741]: I0226 09:53:59.215178 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:53:59 crc kubenswrapper[4741]: I0226 09:53:59.274043 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:54:00 crc kubenswrapper[4741]: I0226 09:54:00.012479 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vd9hl"] Feb 26 09:54:00 crc kubenswrapper[4741]: I0226 09:54:00.168228 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534994-5cvc9"] Feb 26 09:54:00 crc kubenswrapper[4741]: I0226 09:54:00.170422 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534994-5cvc9" Feb 26 09:54:00 crc kubenswrapper[4741]: I0226 09:54:00.175380 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:54:00 crc kubenswrapper[4741]: I0226 09:54:00.175672 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:54:00 crc kubenswrapper[4741]: I0226 09:54:00.175832 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:54:00 crc kubenswrapper[4741]: I0226 09:54:00.181330 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534994-5cvc9"] Feb 26 09:54:00 crc kubenswrapper[4741]: I0226 09:54:00.234933 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8ggr\" (UniqueName: \"kubernetes.io/projected/37a463cc-32ff-4172-90eb-6aba60242097-kube-api-access-w8ggr\") pod \"auto-csr-approver-29534994-5cvc9\" (UID: \"37a463cc-32ff-4172-90eb-6aba60242097\") " pod="openshift-infra/auto-csr-approver-29534994-5cvc9" Feb 26 09:54:00 crc kubenswrapper[4741]: I0226 09:54:00.338264 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8ggr\" (UniqueName: 
\"kubernetes.io/projected/37a463cc-32ff-4172-90eb-6aba60242097-kube-api-access-w8ggr\") pod \"auto-csr-approver-29534994-5cvc9\" (UID: \"37a463cc-32ff-4172-90eb-6aba60242097\") " pod="openshift-infra/auto-csr-approver-29534994-5cvc9" Feb 26 09:54:00 crc kubenswrapper[4741]: I0226 09:54:00.360010 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8ggr\" (UniqueName: \"kubernetes.io/projected/37a463cc-32ff-4172-90eb-6aba60242097-kube-api-access-w8ggr\") pod \"auto-csr-approver-29534994-5cvc9\" (UID: \"37a463cc-32ff-4172-90eb-6aba60242097\") " pod="openshift-infra/auto-csr-approver-29534994-5cvc9" Feb 26 09:54:00 crc kubenswrapper[4741]: I0226 09:54:00.495891 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534994-5cvc9" Feb 26 09:54:00 crc kubenswrapper[4741]: I0226 09:54:00.821668 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vd9hl" podUID="04ac2623-60d9-4d59-a57a-7a467d975151" containerName="registry-server" containerID="cri-o://a5ec93e1c6b6b6a441031432d9497ec23d46866a5b432a5e96b75394bedf9ed8" gracePeriod=2 Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.217876 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534994-5cvc9"] Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.304009 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.372145 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l85r5\" (UniqueName: \"kubernetes.io/projected/04ac2623-60d9-4d59-a57a-7a467d975151-kube-api-access-l85r5\") pod \"04ac2623-60d9-4d59-a57a-7a467d975151\" (UID: \"04ac2623-60d9-4d59-a57a-7a467d975151\") " Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.372310 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ac2623-60d9-4d59-a57a-7a467d975151-utilities\") pod \"04ac2623-60d9-4d59-a57a-7a467d975151\" (UID: \"04ac2623-60d9-4d59-a57a-7a467d975151\") " Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.372347 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ac2623-60d9-4d59-a57a-7a467d975151-catalog-content\") pod \"04ac2623-60d9-4d59-a57a-7a467d975151\" (UID: \"04ac2623-60d9-4d59-a57a-7a467d975151\") " Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.374041 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04ac2623-60d9-4d59-a57a-7a467d975151-utilities" (OuterVolumeSpecName: "utilities") pod "04ac2623-60d9-4d59-a57a-7a467d975151" (UID: "04ac2623-60d9-4d59-a57a-7a467d975151"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.380411 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ac2623-60d9-4d59-a57a-7a467d975151-kube-api-access-l85r5" (OuterVolumeSpecName: "kube-api-access-l85r5") pod "04ac2623-60d9-4d59-a57a-7a467d975151" (UID: "04ac2623-60d9-4d59-a57a-7a467d975151"). InnerVolumeSpecName "kube-api-access-l85r5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.435733 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04ac2623-60d9-4d59-a57a-7a467d975151-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04ac2623-60d9-4d59-a57a-7a467d975151" (UID: "04ac2623-60d9-4d59-a57a-7a467d975151"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.475700 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l85r5\" (UniqueName: \"kubernetes.io/projected/04ac2623-60d9-4d59-a57a-7a467d975151-kube-api-access-l85r5\") on node \"crc\" DevicePath \"\"" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.475742 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ac2623-60d9-4d59-a57a-7a467d975151-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.475753 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ac2623-60d9-4d59-a57a-7a467d975151-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.786909 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:54:01 crc kubenswrapper[4741]: E0226 09:54:01.787369 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:54:01 
crc kubenswrapper[4741]: I0226 09:54:01.837616 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vd9hl" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.837455 4741 generic.go:334] "Generic (PLEG): container finished" podID="04ac2623-60d9-4d59-a57a-7a467d975151" containerID="a5ec93e1c6b6b6a441031432d9497ec23d46866a5b432a5e96b75394bedf9ed8" exitCode=0 Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.837651 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd9hl" event={"ID":"04ac2623-60d9-4d59-a57a-7a467d975151","Type":"ContainerDied","Data":"a5ec93e1c6b6b6a441031432d9497ec23d46866a5b432a5e96b75394bedf9ed8"} Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.838679 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd9hl" event={"ID":"04ac2623-60d9-4d59-a57a-7a467d975151","Type":"ContainerDied","Data":"d3dbc85a39cccdbe73149e8d0a4aaab9941c055d7d5024d28164cf7d709b65f0"} Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.838719 4741 scope.go:117] "RemoveContainer" containerID="a5ec93e1c6b6b6a441031432d9497ec23d46866a5b432a5e96b75394bedf9ed8" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.842039 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534994-5cvc9" event={"ID":"37a463cc-32ff-4172-90eb-6aba60242097","Type":"ContainerStarted","Data":"02e4f1213844acedde8cab5a333a3f9ceb38a5ced8b3d4374ac671791959e71e"} Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.882748 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vd9hl"] Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.888688 4741 scope.go:117] "RemoveContainer" containerID="02e2350d71d98d97b33ad39b7d1bf42b16902038b0e35d7f96c60326aa66241c" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.898820 
4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vd9hl"] Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.915650 4741 scope.go:117] "RemoveContainer" containerID="afe5db11be0b3194ddff98d9e31069fc36a854316629c98759f972985d6d51fc" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.973933 4741 scope.go:117] "RemoveContainer" containerID="a5ec93e1c6b6b6a441031432d9497ec23d46866a5b432a5e96b75394bedf9ed8" Feb 26 09:54:01 crc kubenswrapper[4741]: E0226 09:54:01.974749 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5ec93e1c6b6b6a441031432d9497ec23d46866a5b432a5e96b75394bedf9ed8\": container with ID starting with a5ec93e1c6b6b6a441031432d9497ec23d46866a5b432a5e96b75394bedf9ed8 not found: ID does not exist" containerID="a5ec93e1c6b6b6a441031432d9497ec23d46866a5b432a5e96b75394bedf9ed8" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.974790 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5ec93e1c6b6b6a441031432d9497ec23d46866a5b432a5e96b75394bedf9ed8"} err="failed to get container status \"a5ec93e1c6b6b6a441031432d9497ec23d46866a5b432a5e96b75394bedf9ed8\": rpc error: code = NotFound desc = could not find container \"a5ec93e1c6b6b6a441031432d9497ec23d46866a5b432a5e96b75394bedf9ed8\": container with ID starting with a5ec93e1c6b6b6a441031432d9497ec23d46866a5b432a5e96b75394bedf9ed8 not found: ID does not exist" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.974827 4741 scope.go:117] "RemoveContainer" containerID="02e2350d71d98d97b33ad39b7d1bf42b16902038b0e35d7f96c60326aa66241c" Feb 26 09:54:01 crc kubenswrapper[4741]: E0226 09:54:01.975611 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02e2350d71d98d97b33ad39b7d1bf42b16902038b0e35d7f96c60326aa66241c\": container with ID starting with 
02e2350d71d98d97b33ad39b7d1bf42b16902038b0e35d7f96c60326aa66241c not found: ID does not exist" containerID="02e2350d71d98d97b33ad39b7d1bf42b16902038b0e35d7f96c60326aa66241c" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.975647 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02e2350d71d98d97b33ad39b7d1bf42b16902038b0e35d7f96c60326aa66241c"} err="failed to get container status \"02e2350d71d98d97b33ad39b7d1bf42b16902038b0e35d7f96c60326aa66241c\": rpc error: code = NotFound desc = could not find container \"02e2350d71d98d97b33ad39b7d1bf42b16902038b0e35d7f96c60326aa66241c\": container with ID starting with 02e2350d71d98d97b33ad39b7d1bf42b16902038b0e35d7f96c60326aa66241c not found: ID does not exist" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.975672 4741 scope.go:117] "RemoveContainer" containerID="afe5db11be0b3194ddff98d9e31069fc36a854316629c98759f972985d6d51fc" Feb 26 09:54:01 crc kubenswrapper[4741]: E0226 09:54:01.976098 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afe5db11be0b3194ddff98d9e31069fc36a854316629c98759f972985d6d51fc\": container with ID starting with afe5db11be0b3194ddff98d9e31069fc36a854316629c98759f972985d6d51fc not found: ID does not exist" containerID="afe5db11be0b3194ddff98d9e31069fc36a854316629c98759f972985d6d51fc" Feb 26 09:54:01 crc kubenswrapper[4741]: I0226 09:54:01.976140 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afe5db11be0b3194ddff98d9e31069fc36a854316629c98759f972985d6d51fc"} err="failed to get container status \"afe5db11be0b3194ddff98d9e31069fc36a854316629c98759f972985d6d51fc\": rpc error: code = NotFound desc = could not find container \"afe5db11be0b3194ddff98d9e31069fc36a854316629c98759f972985d6d51fc\": container with ID starting with afe5db11be0b3194ddff98d9e31069fc36a854316629c98759f972985d6d51fc not found: ID does not 
exist" Feb 26 09:54:02 crc kubenswrapper[4741]: I0226 09:54:02.864056 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534994-5cvc9" event={"ID":"37a463cc-32ff-4172-90eb-6aba60242097","Type":"ContainerStarted","Data":"bbe5839017308bcddbabd1f58ac751552b96cfdb61a6a90dc9e7e84e4f05269f"} Feb 26 09:54:02 crc kubenswrapper[4741]: I0226 09:54:02.891226 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534994-5cvc9" podStartSLOduration=1.785263034 podStartE2EDuration="2.891179437s" podCreationTimestamp="2026-02-26 09:54:00 +0000 UTC" firstStartedPulling="2026-02-26 09:54:01.229637626 +0000 UTC m=+6076.225575023" lastFinishedPulling="2026-02-26 09:54:02.335554029 +0000 UTC m=+6077.331491426" observedRunningTime="2026-02-26 09:54:02.879156195 +0000 UTC m=+6077.875093572" watchObservedRunningTime="2026-02-26 09:54:02.891179437 +0000 UTC m=+6077.887116824" Feb 26 09:54:03 crc kubenswrapper[4741]: I0226 09:54:03.805665 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ac2623-60d9-4d59-a57a-7a467d975151" path="/var/lib/kubelet/pods/04ac2623-60d9-4d59-a57a-7a467d975151/volumes" Feb 26 09:54:04 crc kubenswrapper[4741]: I0226 09:54:04.889291 4741 generic.go:334] "Generic (PLEG): container finished" podID="37a463cc-32ff-4172-90eb-6aba60242097" containerID="bbe5839017308bcddbabd1f58ac751552b96cfdb61a6a90dc9e7e84e4f05269f" exitCode=0 Feb 26 09:54:04 crc kubenswrapper[4741]: I0226 09:54:04.889355 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534994-5cvc9" event={"ID":"37a463cc-32ff-4172-90eb-6aba60242097","Type":"ContainerDied","Data":"bbe5839017308bcddbabd1f58ac751552b96cfdb61a6a90dc9e7e84e4f05269f"} Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.357609 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534994-5cvc9" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.451969 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8ggr\" (UniqueName: \"kubernetes.io/projected/37a463cc-32ff-4172-90eb-6aba60242097-kube-api-access-w8ggr\") pod \"37a463cc-32ff-4172-90eb-6aba60242097\" (UID: \"37a463cc-32ff-4172-90eb-6aba60242097\") " Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.465482 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a463cc-32ff-4172-90eb-6aba60242097-kube-api-access-w8ggr" (OuterVolumeSpecName: "kube-api-access-w8ggr") pod "37a463cc-32ff-4172-90eb-6aba60242097" (UID: "37a463cc-32ff-4172-90eb-6aba60242097"). InnerVolumeSpecName "kube-api-access-w8ggr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.555699 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8ggr\" (UniqueName: \"kubernetes.io/projected/37a463cc-32ff-4172-90eb-6aba60242097-kube-api-access-w8ggr\") on node \"crc\" DevicePath \"\"" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.592368 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cwn68"] Feb 26 09:54:06 crc kubenswrapper[4741]: E0226 09:54:06.593362 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ac2623-60d9-4d59-a57a-7a467d975151" containerName="extract-content" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.593474 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ac2623-60d9-4d59-a57a-7a467d975151" containerName="extract-content" Feb 26 09:54:06 crc kubenswrapper[4741]: E0226 09:54:06.593552 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ac2623-60d9-4d59-a57a-7a467d975151" containerName="registry-server" Feb 26 09:54:06 crc 
kubenswrapper[4741]: I0226 09:54:06.593607 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ac2623-60d9-4d59-a57a-7a467d975151" containerName="registry-server" Feb 26 09:54:06 crc kubenswrapper[4741]: E0226 09:54:06.593699 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a463cc-32ff-4172-90eb-6aba60242097" containerName="oc" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.593766 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a463cc-32ff-4172-90eb-6aba60242097" containerName="oc" Feb 26 09:54:06 crc kubenswrapper[4741]: E0226 09:54:06.593834 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ac2623-60d9-4d59-a57a-7a467d975151" containerName="extract-utilities" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.593895 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ac2623-60d9-4d59-a57a-7a467d975151" containerName="extract-utilities" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.594224 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ac2623-60d9-4d59-a57a-7a467d975151" containerName="registry-server" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.594327 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="37a463cc-32ff-4172-90eb-6aba60242097" containerName="oc" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.596784 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.618449 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwn68"] Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.659690 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2816b6-3316-40e6-9458-2d9a2b2d0221-utilities\") pod \"redhat-operators-cwn68\" (UID: \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\") " pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.659825 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2816b6-3316-40e6-9458-2d9a2b2d0221-catalog-content\") pod \"redhat-operators-cwn68\" (UID: \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\") " pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.659934 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjk7j\" (UniqueName: \"kubernetes.io/projected/ef2816b6-3316-40e6-9458-2d9a2b2d0221-kube-api-access-zjk7j\") pod \"redhat-operators-cwn68\" (UID: \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\") " pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.761545 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2816b6-3316-40e6-9458-2d9a2b2d0221-catalog-content\") pod \"redhat-operators-cwn68\" (UID: \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\") " pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.761691 4741 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-zjk7j\" (UniqueName: \"kubernetes.io/projected/ef2816b6-3316-40e6-9458-2d9a2b2d0221-kube-api-access-zjk7j\") pod \"redhat-operators-cwn68\" (UID: \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\") " pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.761908 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2816b6-3316-40e6-9458-2d9a2b2d0221-utilities\") pod \"redhat-operators-cwn68\" (UID: \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\") " pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.762408 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2816b6-3316-40e6-9458-2d9a2b2d0221-utilities\") pod \"redhat-operators-cwn68\" (UID: \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\") " pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.762585 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2816b6-3316-40e6-9458-2d9a2b2d0221-catalog-content\") pod \"redhat-operators-cwn68\" (UID: \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\") " pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.786505 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjk7j\" (UniqueName: \"kubernetes.io/projected/ef2816b6-3316-40e6-9458-2d9a2b2d0221-kube-api-access-zjk7j\") pod \"redhat-operators-cwn68\" (UID: \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\") " pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.918746 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534994-5cvc9" 
event={"ID":"37a463cc-32ff-4172-90eb-6aba60242097","Type":"ContainerDied","Data":"02e4f1213844acedde8cab5a333a3f9ceb38a5ced8b3d4374ac671791959e71e"} Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.918817 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02e4f1213844acedde8cab5a333a3f9ceb38a5ced8b3d4374ac671791959e71e" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.919276 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534994-5cvc9" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.920266 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:54:06 crc kubenswrapper[4741]: I0226 09:54:06.994815 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534988-qtrm9"] Feb 26 09:54:07 crc kubenswrapper[4741]: I0226 09:54:07.008817 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534988-qtrm9"] Feb 26 09:54:07 crc kubenswrapper[4741]: I0226 09:54:07.478015 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwn68"] Feb 26 09:54:07 crc kubenswrapper[4741]: W0226 09:54:07.485333 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef2816b6_3316_40e6_9458_2d9a2b2d0221.slice/crio-fff2c9319b8a19a5a9123f995469a76c4e8979f10a5c4a918cb02633c23beb62 WatchSource:0}: Error finding container fff2c9319b8a19a5a9123f995469a76c4e8979f10a5c4a918cb02633c23beb62: Status 404 returned error can't find the container with id fff2c9319b8a19a5a9123f995469a76c4e8979f10a5c4a918cb02633c23beb62 Feb 26 09:54:07 crc kubenswrapper[4741]: I0226 09:54:07.804745 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c850f612-7dc3-4114-b40c-81cbf0926ffc" 
path="/var/lib/kubelet/pods/c850f612-7dc3-4114-b40c-81cbf0926ffc/volumes" Feb 26 09:54:07 crc kubenswrapper[4741]: I0226 09:54:07.936782 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwn68" event={"ID":"ef2816b6-3316-40e6-9458-2d9a2b2d0221","Type":"ContainerStarted","Data":"fff2c9319b8a19a5a9123f995469a76c4e8979f10a5c4a918cb02633c23beb62"} Feb 26 09:54:08 crc kubenswrapper[4741]: I0226 09:54:08.952264 4741 generic.go:334] "Generic (PLEG): container finished" podID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerID="6917afc05239a29ccecd9c0f958b9017426fe88247762ff2483b24d68bdc7266" exitCode=0 Feb 26 09:54:08 crc kubenswrapper[4741]: I0226 09:54:08.952370 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwn68" event={"ID":"ef2816b6-3316-40e6-9458-2d9a2b2d0221","Type":"ContainerDied","Data":"6917afc05239a29ccecd9c0f958b9017426fe88247762ff2483b24d68bdc7266"} Feb 26 09:54:09 crc kubenswrapper[4741]: I0226 09:54:09.075538 4741 scope.go:117] "RemoveContainer" containerID="7fbed949edf04d32268cfa319499ea46ae904d34d199812a0fdac1365f9cba9e" Feb 26 09:54:09 crc kubenswrapper[4741]: I0226 09:54:09.968755 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwn68" event={"ID":"ef2816b6-3316-40e6-9458-2d9a2b2d0221","Type":"ContainerStarted","Data":"d19b1cc1d91c64ba23d5278ef539e7e148b11f09fe0dc8a7a061c1f4d1b393c3"} Feb 26 09:54:12 crc kubenswrapper[4741]: I0226 09:54:12.788576 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:54:12 crc kubenswrapper[4741]: E0226 09:54:12.789992 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:54:16 crc kubenswrapper[4741]: I0226 09:54:16.038791 4741 generic.go:334] "Generic (PLEG): container finished" podID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerID="d19b1cc1d91c64ba23d5278ef539e7e148b11f09fe0dc8a7a061c1f4d1b393c3" exitCode=0 Feb 26 09:54:16 crc kubenswrapper[4741]: I0226 09:54:16.038920 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwn68" event={"ID":"ef2816b6-3316-40e6-9458-2d9a2b2d0221","Type":"ContainerDied","Data":"d19b1cc1d91c64ba23d5278ef539e7e148b11f09fe0dc8a7a061c1f4d1b393c3"} Feb 26 09:54:17 crc kubenswrapper[4741]: I0226 09:54:17.056170 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwn68" event={"ID":"ef2816b6-3316-40e6-9458-2d9a2b2d0221","Type":"ContainerStarted","Data":"a14080d27d1298aaaf1e1fdad62f9c8dd33817ed6354b442e4991706d8415ffa"} Feb 26 09:54:17 crc kubenswrapper[4741]: I0226 09:54:17.090161 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cwn68" podStartSLOduration=3.5900815440000002 podStartE2EDuration="11.090102453s" podCreationTimestamp="2026-02-26 09:54:06 +0000 UTC" firstStartedPulling="2026-02-26 09:54:08.955485193 +0000 UTC m=+6083.951422600" lastFinishedPulling="2026-02-26 09:54:16.455506112 +0000 UTC m=+6091.451443509" observedRunningTime="2026-02-26 09:54:17.076910529 +0000 UTC m=+6092.072847936" watchObservedRunningTime="2026-02-26 09:54:17.090102453 +0000 UTC m=+6092.086039830" Feb 26 09:54:23 crc kubenswrapper[4741]: I0226 09:54:23.787613 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:54:23 crc kubenswrapper[4741]: E0226 09:54:23.788738 
4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:54:26 crc kubenswrapper[4741]: I0226 09:54:26.921265 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:54:26 crc kubenswrapper[4741]: I0226 09:54:26.921974 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:54:27 crc kubenswrapper[4741]: I0226 09:54:27.974485 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cwn68" podUID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerName="registry-server" probeResult="failure" output=< Feb 26 09:54:27 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:54:27 crc kubenswrapper[4741]: > Feb 26 09:54:34 crc kubenswrapper[4741]: I0226 09:54:34.787560 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:54:34 crc kubenswrapper[4741]: E0226 09:54:34.788237 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:54:38 crc kubenswrapper[4741]: I0226 09:54:38.484671 4741 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-cwn68" podUID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerName="registry-server" probeResult="failure" output=< Feb 26 09:54:38 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:54:38 crc kubenswrapper[4741]: > Feb 26 09:54:47 crc kubenswrapper[4741]: I0226 09:54:47.982842 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cwn68" podUID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerName="registry-server" probeResult="failure" output=< Feb 26 09:54:47 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:54:47 crc kubenswrapper[4741]: > Feb 26 09:54:48 crc kubenswrapper[4741]: I0226 09:54:48.787599 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:54:48 crc kubenswrapper[4741]: E0226 09:54:48.788419 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 09:54:57 crc kubenswrapper[4741]: I0226 09:54:57.979537 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cwn68" podUID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerName="registry-server" probeResult="failure" output=< Feb 26 09:54:57 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 09:54:57 crc kubenswrapper[4741]: > Feb 26 09:55:03 crc kubenswrapper[4741]: I0226 09:55:03.788098 4741 scope.go:117] "RemoveContainer" 
containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e" Feb 26 09:55:05 crc kubenswrapper[4741]: I0226 09:55:05.671481 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"f65ec1d0cad211656d2d423786ba8e61cca81111a476f3647f3bac068f0670b6"} Feb 26 09:55:06 crc kubenswrapper[4741]: I0226 09:55:06.976049 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:55:07 crc kubenswrapper[4741]: I0226 09:55:07.044981 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:55:07 crc kubenswrapper[4741]: I0226 09:55:07.819279 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwn68"] Feb 26 09:55:08 crc kubenswrapper[4741]: I0226 09:55:08.705048 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cwn68" podUID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerName="registry-server" containerID="cri-o://a14080d27d1298aaaf1e1fdad62f9c8dd33817ed6354b442e4991706d8415ffa" gracePeriod=2 Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.649040 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.728690 4741 generic.go:334] "Generic (PLEG): container finished" podID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerID="a14080d27d1298aaaf1e1fdad62f9c8dd33817ed6354b442e4991706d8415ffa" exitCode=0 Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.728773 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwn68" event={"ID":"ef2816b6-3316-40e6-9458-2d9a2b2d0221","Type":"ContainerDied","Data":"a14080d27d1298aaaf1e1fdad62f9c8dd33817ed6354b442e4991706d8415ffa"} Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.728810 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwn68" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.728828 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwn68" event={"ID":"ef2816b6-3316-40e6-9458-2d9a2b2d0221","Type":"ContainerDied","Data":"fff2c9319b8a19a5a9123f995469a76c4e8979f10a5c4a918cb02633c23beb62"} Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.728850 4741 scope.go:117] "RemoveContainer" containerID="a14080d27d1298aaaf1e1fdad62f9c8dd33817ed6354b442e4991706d8415ffa" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.758497 4741 scope.go:117] "RemoveContainer" containerID="d19b1cc1d91c64ba23d5278ef539e7e148b11f09fe0dc8a7a061c1f4d1b393c3" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.764486 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjk7j\" (UniqueName: \"kubernetes.io/projected/ef2816b6-3316-40e6-9458-2d9a2b2d0221-kube-api-access-zjk7j\") pod \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\" (UID: \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\") " Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.766577 4741 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2816b6-3316-40e6-9458-2d9a2b2d0221-utilities\") pod \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\" (UID: \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\") " Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.767118 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2816b6-3316-40e6-9458-2d9a2b2d0221-catalog-content\") pod \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\" (UID: \"ef2816b6-3316-40e6-9458-2d9a2b2d0221\") " Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.768019 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef2816b6-3316-40e6-9458-2d9a2b2d0221-utilities" (OuterVolumeSpecName: "utilities") pod "ef2816b6-3316-40e6-9458-2d9a2b2d0221" (UID: "ef2816b6-3316-40e6-9458-2d9a2b2d0221"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.768826 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2816b6-3316-40e6-9458-2d9a2b2d0221-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.774647 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef2816b6-3316-40e6-9458-2d9a2b2d0221-kube-api-access-zjk7j" (OuterVolumeSpecName: "kube-api-access-zjk7j") pod "ef2816b6-3316-40e6-9458-2d9a2b2d0221" (UID: "ef2816b6-3316-40e6-9458-2d9a2b2d0221"). InnerVolumeSpecName "kube-api-access-zjk7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.781683 4741 scope.go:117] "RemoveContainer" containerID="6917afc05239a29ccecd9c0f958b9017426fe88247762ff2483b24d68bdc7266" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.871499 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjk7j\" (UniqueName: \"kubernetes.io/projected/ef2816b6-3316-40e6-9458-2d9a2b2d0221-kube-api-access-zjk7j\") on node \"crc\" DevicePath \"\"" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.903744 4741 scope.go:117] "RemoveContainer" containerID="a14080d27d1298aaaf1e1fdad62f9c8dd33817ed6354b442e4991706d8415ffa" Feb 26 09:55:09 crc kubenswrapper[4741]: E0226 09:55:09.904365 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a14080d27d1298aaaf1e1fdad62f9c8dd33817ed6354b442e4991706d8415ffa\": container with ID starting with a14080d27d1298aaaf1e1fdad62f9c8dd33817ed6354b442e4991706d8415ffa not found: ID does not exist" containerID="a14080d27d1298aaaf1e1fdad62f9c8dd33817ed6354b442e4991706d8415ffa" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.904413 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a14080d27d1298aaaf1e1fdad62f9c8dd33817ed6354b442e4991706d8415ffa"} err="failed to get container status \"a14080d27d1298aaaf1e1fdad62f9c8dd33817ed6354b442e4991706d8415ffa\": rpc error: code = NotFound desc = could not find container \"a14080d27d1298aaaf1e1fdad62f9c8dd33817ed6354b442e4991706d8415ffa\": container with ID starting with a14080d27d1298aaaf1e1fdad62f9c8dd33817ed6354b442e4991706d8415ffa not found: ID does not exist" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.904435 4741 scope.go:117] "RemoveContainer" containerID="d19b1cc1d91c64ba23d5278ef539e7e148b11f09fe0dc8a7a061c1f4d1b393c3" Feb 26 09:55:09 crc kubenswrapper[4741]: E0226 09:55:09.904883 
4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d19b1cc1d91c64ba23d5278ef539e7e148b11f09fe0dc8a7a061c1f4d1b393c3\": container with ID starting with d19b1cc1d91c64ba23d5278ef539e7e148b11f09fe0dc8a7a061c1f4d1b393c3 not found: ID does not exist" containerID="d19b1cc1d91c64ba23d5278ef539e7e148b11f09fe0dc8a7a061c1f4d1b393c3" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.904899 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d19b1cc1d91c64ba23d5278ef539e7e148b11f09fe0dc8a7a061c1f4d1b393c3"} err="failed to get container status \"d19b1cc1d91c64ba23d5278ef539e7e148b11f09fe0dc8a7a061c1f4d1b393c3\": rpc error: code = NotFound desc = could not find container \"d19b1cc1d91c64ba23d5278ef539e7e148b11f09fe0dc8a7a061c1f4d1b393c3\": container with ID starting with d19b1cc1d91c64ba23d5278ef539e7e148b11f09fe0dc8a7a061c1f4d1b393c3 not found: ID does not exist" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.904911 4741 scope.go:117] "RemoveContainer" containerID="6917afc05239a29ccecd9c0f958b9017426fe88247762ff2483b24d68bdc7266" Feb 26 09:55:09 crc kubenswrapper[4741]: E0226 09:55:09.905250 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6917afc05239a29ccecd9c0f958b9017426fe88247762ff2483b24d68bdc7266\": container with ID starting with 6917afc05239a29ccecd9c0f958b9017426fe88247762ff2483b24d68bdc7266 not found: ID does not exist" containerID="6917afc05239a29ccecd9c0f958b9017426fe88247762ff2483b24d68bdc7266" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.905309 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6917afc05239a29ccecd9c0f958b9017426fe88247762ff2483b24d68bdc7266"} err="failed to get container status \"6917afc05239a29ccecd9c0f958b9017426fe88247762ff2483b24d68bdc7266\": rpc error: code = 
NotFound desc = could not find container \"6917afc05239a29ccecd9c0f958b9017426fe88247762ff2483b24d68bdc7266\": container with ID starting with 6917afc05239a29ccecd9c0f958b9017426fe88247762ff2483b24d68bdc7266 not found: ID does not exist" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.906645 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef2816b6-3316-40e6-9458-2d9a2b2d0221-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef2816b6-3316-40e6-9458-2d9a2b2d0221" (UID: "ef2816b6-3316-40e6-9458-2d9a2b2d0221"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:55:09 crc kubenswrapper[4741]: I0226 09:55:09.973807 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2816b6-3316-40e6-9458-2d9a2b2d0221-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:55:10 crc kubenswrapper[4741]: I0226 09:55:10.083145 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwn68"] Feb 26 09:55:10 crc kubenswrapper[4741]: I0226 09:55:10.097597 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cwn68"] Feb 26 09:55:11 crc kubenswrapper[4741]: I0226 09:55:11.803121 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" path="/var/lib/kubelet/pods/ef2816b6-3316-40e6-9458-2d9a2b2d0221/volumes" Feb 26 09:55:23 crc kubenswrapper[4741]: I0226 09:55:23.963094 4741 generic.go:334] "Generic (PLEG): container finished" podID="5b00e4ed-4b6a-4871-b454-dec4229deb64" containerID="130ffb6ef33dfc077043c0c737d6caf31aa7ae15c6596487ce4923fb710eb20a" exitCode=1 Feb 26 09:55:23 crc kubenswrapper[4741]: I0226 09:55:23.963538 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"5b00e4ed-4b6a-4871-b454-dec4229deb64","Type":"ContainerDied","Data":"130ffb6ef33dfc077043c0c737d6caf31aa7ae15c6596487ce4923fb710eb20a"} Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.411526 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.548328 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-ssh-key\") pod \"5b00e4ed-4b6a-4871-b454-dec4229deb64\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.548434 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5b00e4ed-4b6a-4871-b454-dec4229deb64-test-operator-ephemeral-temporary\") pod \"5b00e4ed-4b6a-4871-b454-dec4229deb64\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.548551 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b00e4ed-4b6a-4871-b454-dec4229deb64-config-data\") pod \"5b00e4ed-4b6a-4871-b454-dec4229deb64\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.548693 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"5b00e4ed-4b6a-4871-b454-dec4229deb64\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.548730 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/5b00e4ed-4b6a-4871-b454-dec4229deb64-openstack-config\") pod \"5b00e4ed-4b6a-4871-b454-dec4229deb64\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.548751 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5b00e4ed-4b6a-4871-b454-dec4229deb64-test-operator-ephemeral-workdir\") pod \"5b00e4ed-4b6a-4871-b454-dec4229deb64\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.548786 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-ca-certs\") pod \"5b00e4ed-4b6a-4871-b454-dec4229deb64\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.548876 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hlww\" (UniqueName: \"kubernetes.io/projected/5b00e4ed-4b6a-4871-b454-dec4229deb64-kube-api-access-2hlww\") pod \"5b00e4ed-4b6a-4871-b454-dec4229deb64\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.549102 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-openstack-config-secret\") pod \"5b00e4ed-4b6a-4871-b454-dec4229deb64\" (UID: \"5b00e4ed-4b6a-4871-b454-dec4229deb64\") " Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.551200 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b00e4ed-4b6a-4871-b454-dec4229deb64-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "5b00e4ed-4b6a-4871-b454-dec4229deb64" (UID: 
"5b00e4ed-4b6a-4871-b454-dec4229deb64"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.551748 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b00e4ed-4b6a-4871-b454-dec4229deb64-config-data" (OuterVolumeSpecName: "config-data") pod "5b00e4ed-4b6a-4871-b454-dec4229deb64" (UID: "5b00e4ed-4b6a-4871-b454-dec4229deb64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.555168 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b00e4ed-4b6a-4871-b454-dec4229deb64-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5b00e4ed-4b6a-4871-b454-dec4229deb64" (UID: "5b00e4ed-4b6a-4871-b454-dec4229deb64"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.559284 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b00e4ed-4b6a-4871-b454-dec4229deb64-kube-api-access-2hlww" (OuterVolumeSpecName: "kube-api-access-2hlww") pod "5b00e4ed-4b6a-4871-b454-dec4229deb64" (UID: "5b00e4ed-4b6a-4871-b454-dec4229deb64"). InnerVolumeSpecName "kube-api-access-2hlww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.561990 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5b00e4ed-4b6a-4871-b454-dec4229deb64" (UID: "5b00e4ed-4b6a-4871-b454-dec4229deb64"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.593709 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5b00e4ed-4b6a-4871-b454-dec4229deb64" (UID: "5b00e4ed-4b6a-4871-b454-dec4229deb64"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.593857 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5b00e4ed-4b6a-4871-b454-dec4229deb64" (UID: "5b00e4ed-4b6a-4871-b454-dec4229deb64"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.598300 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5b00e4ed-4b6a-4871-b454-dec4229deb64" (UID: "5b00e4ed-4b6a-4871-b454-dec4229deb64"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.624679 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b00e4ed-4b6a-4871-b454-dec4229deb64-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5b00e4ed-4b6a-4871-b454-dec4229deb64" (UID: "5b00e4ed-4b6a-4871-b454-dec4229deb64"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.653694 4741 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.653737 4741 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5b00e4ed-4b6a-4871-b454-dec4229deb64-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.653754 4741 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5b00e4ed-4b6a-4871-b454-dec4229deb64-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.653765 4741 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.653775 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hlww\" (UniqueName: \"kubernetes.io/projected/5b00e4ed-4b6a-4871-b454-dec4229deb64-kube-api-access-2hlww\") on node \"crc\" DevicePath \"\"" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.653786 4741 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.653796 4741 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b00e4ed-4b6a-4871-b454-dec4229deb64-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 
09:55:25.653806 4741 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5b00e4ed-4b6a-4871-b454-dec4229deb64-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.653815 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b00e4ed-4b6a-4871-b454-dec4229deb64-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.681802 4741 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.755230 4741 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.997399 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5b00e4ed-4b6a-4871-b454-dec4229deb64","Type":"ContainerDied","Data":"cc072564262ce9da373a546de1587207134972131a19af4b269699f21eeeafa7"} Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.997796 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc072564262ce9da373a546de1587207134972131a19af4b269699f21eeeafa7" Feb 26 09:55:25 crc kubenswrapper[4741]: I0226 09:55:25.997448 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.543073 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 09:55:34 crc kubenswrapper[4741]: E0226 09:55:34.544333 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b00e4ed-4b6a-4871-b454-dec4229deb64" containerName="tempest-tests-tempest-tests-runner" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.544354 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b00e4ed-4b6a-4871-b454-dec4229deb64" containerName="tempest-tests-tempest-tests-runner" Feb 26 09:55:34 crc kubenswrapper[4741]: E0226 09:55:34.544374 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerName="extract-utilities" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.544381 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerName="extract-utilities" Feb 26 09:55:34 crc kubenswrapper[4741]: E0226 09:55:34.544399 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerName="extract-content" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.544405 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerName="extract-content" Feb 26 09:55:34 crc kubenswrapper[4741]: E0226 09:55:34.544435 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerName="registry-server" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.544441 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerName="registry-server" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.544742 4741 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5b00e4ed-4b6a-4871-b454-dec4229deb64" containerName="tempest-tests-tempest-tests-runner" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.544767 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef2816b6-3316-40e6-9458-2d9a2b2d0221" containerName="registry-server" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.545855 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.549267 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-948fk" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.673075 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e72cb7c-ec1b-463c-96ee-7d6e64f805b1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.673382 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmqt9\" (UniqueName: \"kubernetes.io/projected/7e72cb7c-ec1b-463c-96ee-7d6e64f805b1-kube-api-access-jmqt9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e72cb7c-ec1b-463c-96ee-7d6e64f805b1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.675910 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.776452 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmqt9\" (UniqueName: 
\"kubernetes.io/projected/7e72cb7c-ec1b-463c-96ee-7d6e64f805b1-kube-api-access-jmqt9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e72cb7c-ec1b-463c-96ee-7d6e64f805b1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.776531 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e72cb7c-ec1b-463c-96ee-7d6e64f805b1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.779655 4741 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e72cb7c-ec1b-463c-96ee-7d6e64f805b1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.806809 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmqt9\" (UniqueName: \"kubernetes.io/projected/7e72cb7c-ec1b-463c-96ee-7d6e64f805b1-kube-api-access-jmqt9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e72cb7c-ec1b-463c-96ee-7d6e64f805b1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 09:55:34 crc kubenswrapper[4741]: I0226 09:55:34.816922 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e72cb7c-ec1b-463c-96ee-7d6e64f805b1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 09:55:34 
crc kubenswrapper[4741]: I0226 09:55:34.870599 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 09:55:35 crc kubenswrapper[4741]: I0226 09:55:35.401685 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 09:55:35 crc kubenswrapper[4741]: I0226 09:55:35.412451 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 09:55:36 crc kubenswrapper[4741]: I0226 09:55:36.125877 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7e72cb7c-ec1b-463c-96ee-7d6e64f805b1","Type":"ContainerStarted","Data":"4063a2724a22b0208cb67b6ddff946f3e4f9a9f7d37c8e00909dc9df1566bb3c"} Feb 26 09:55:38 crc kubenswrapper[4741]: I0226 09:55:38.176657 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7e72cb7c-ec1b-463c-96ee-7d6e64f805b1","Type":"ContainerStarted","Data":"983d2353ddd6515dd0781bde41bfe9d3a7e4cd48ec8322f674894a106b7a6e50"} Feb 26 09:55:38 crc kubenswrapper[4741]: I0226 09:55:38.196614 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.712395892 podStartE2EDuration="4.196586272s" podCreationTimestamp="2026-02-26 09:55:34 +0000 UTC" firstStartedPulling="2026-02-26 09:55:35.41222565 +0000 UTC m=+6170.408163037" lastFinishedPulling="2026-02-26 09:55:36.89641603 +0000 UTC m=+6171.892353417" observedRunningTime="2026-02-26 09:55:38.191158858 +0000 UTC m=+6173.187096245" watchObservedRunningTime="2026-02-26 09:55:38.196586272 +0000 UTC m=+6173.192523659" Feb 26 09:56:00 crc kubenswrapper[4741]: I0226 09:56:00.191430 4741 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29534996-cqzpn"] Feb 26 09:56:00 crc kubenswrapper[4741]: I0226 09:56:00.194853 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534996-cqzpn" Feb 26 09:56:00 crc kubenswrapper[4741]: I0226 09:56:00.197869 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 09:56:00 crc kubenswrapper[4741]: I0226 09:56:00.198104 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:56:00 crc kubenswrapper[4741]: I0226 09:56:00.200085 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:56:00 crc kubenswrapper[4741]: I0226 09:56:00.206328 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534996-cqzpn"] Feb 26 09:56:00 crc kubenswrapper[4741]: I0226 09:56:00.293184 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpr55\" (UniqueName: \"kubernetes.io/projected/3e032146-0ca3-4914-9303-16d588739174-kube-api-access-hpr55\") pod \"auto-csr-approver-29534996-cqzpn\" (UID: \"3e032146-0ca3-4914-9303-16d588739174\") " pod="openshift-infra/auto-csr-approver-29534996-cqzpn" Feb 26 09:56:00 crc kubenswrapper[4741]: I0226 09:56:00.396419 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpr55\" (UniqueName: \"kubernetes.io/projected/3e032146-0ca3-4914-9303-16d588739174-kube-api-access-hpr55\") pod \"auto-csr-approver-29534996-cqzpn\" (UID: \"3e032146-0ca3-4914-9303-16d588739174\") " pod="openshift-infra/auto-csr-approver-29534996-cqzpn" Feb 26 09:56:00 crc kubenswrapper[4741]: I0226 09:56:00.424594 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpr55\" (UniqueName: 
\"kubernetes.io/projected/3e032146-0ca3-4914-9303-16d588739174-kube-api-access-hpr55\") pod \"auto-csr-approver-29534996-cqzpn\" (UID: \"3e032146-0ca3-4914-9303-16d588739174\") " pod="openshift-infra/auto-csr-approver-29534996-cqzpn" Feb 26 09:56:00 crc kubenswrapper[4741]: I0226 09:56:00.519687 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534996-cqzpn" Feb 26 09:56:01 crc kubenswrapper[4741]: I0226 09:56:01.075277 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534996-cqzpn"] Feb 26 09:56:01 crc kubenswrapper[4741]: I0226 09:56:01.487203 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534996-cqzpn" event={"ID":"3e032146-0ca3-4914-9303-16d588739174","Type":"ContainerStarted","Data":"21fc0736172d2a8bc89725a66dfe2362cb0e12292e70d5328ad2cdcea7246910"} Feb 26 09:56:03 crc kubenswrapper[4741]: I0226 09:56:03.526188 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534996-cqzpn" event={"ID":"3e032146-0ca3-4914-9303-16d588739174","Type":"ContainerStarted","Data":"d96a31babe63bef2ab06f2d7fd616b418f13f328d0435e57913b34cad6a0840c"} Feb 26 09:56:03 crc kubenswrapper[4741]: I0226 09:56:03.562002 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534996-cqzpn" podStartSLOduration=2.582979971 podStartE2EDuration="3.561974186s" podCreationTimestamp="2026-02-26 09:56:00 +0000 UTC" firstStartedPulling="2026-02-26 09:56:01.084328039 +0000 UTC m=+6196.080265426" lastFinishedPulling="2026-02-26 09:56:02.063322254 +0000 UTC m=+6197.059259641" observedRunningTime="2026-02-26 09:56:03.550777658 +0000 UTC m=+6198.546715045" watchObservedRunningTime="2026-02-26 09:56:03.561974186 +0000 UTC m=+6198.557911573" Feb 26 09:56:04 crc kubenswrapper[4741]: I0226 09:56:04.554407 4741 generic.go:334] "Generic (PLEG): container 
finished" podID="3e032146-0ca3-4914-9303-16d588739174" containerID="d96a31babe63bef2ab06f2d7fd616b418f13f328d0435e57913b34cad6a0840c" exitCode=0 Feb 26 09:56:04 crc kubenswrapper[4741]: I0226 09:56:04.554473 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534996-cqzpn" event={"ID":"3e032146-0ca3-4914-9303-16d588739174","Type":"ContainerDied","Data":"d96a31babe63bef2ab06f2d7fd616b418f13f328d0435e57913b34cad6a0840c"} Feb 26 09:56:06 crc kubenswrapper[4741]: I0226 09:56:06.040232 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534996-cqzpn" Feb 26 09:56:06 crc kubenswrapper[4741]: I0226 09:56:06.079130 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpr55\" (UniqueName: \"kubernetes.io/projected/3e032146-0ca3-4914-9303-16d588739174-kube-api-access-hpr55\") pod \"3e032146-0ca3-4914-9303-16d588739174\" (UID: \"3e032146-0ca3-4914-9303-16d588739174\") " Feb 26 09:56:06 crc kubenswrapper[4741]: I0226 09:56:06.085668 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e032146-0ca3-4914-9303-16d588739174-kube-api-access-hpr55" (OuterVolumeSpecName: "kube-api-access-hpr55") pod "3e032146-0ca3-4914-9303-16d588739174" (UID: "3e032146-0ca3-4914-9303-16d588739174"). InnerVolumeSpecName "kube-api-access-hpr55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:56:06 crc kubenswrapper[4741]: I0226 09:56:06.184100 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpr55\" (UniqueName: \"kubernetes.io/projected/3e032146-0ca3-4914-9303-16d588739174-kube-api-access-hpr55\") on node \"crc\" DevicePath \"\"" Feb 26 09:56:06 crc kubenswrapper[4741]: I0226 09:56:06.588474 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534996-cqzpn" event={"ID":"3e032146-0ca3-4914-9303-16d588739174","Type":"ContainerDied","Data":"21fc0736172d2a8bc89725a66dfe2362cb0e12292e70d5328ad2cdcea7246910"} Feb 26 09:56:06 crc kubenswrapper[4741]: I0226 09:56:06.588971 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21fc0736172d2a8bc89725a66dfe2362cb0e12292e70d5328ad2cdcea7246910" Feb 26 09:56:06 crc kubenswrapper[4741]: I0226 09:56:06.589047 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534996-cqzpn" Feb 26 09:56:06 crc kubenswrapper[4741]: I0226 09:56:06.643082 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534990-6x9qf"] Feb 26 09:56:06 crc kubenswrapper[4741]: I0226 09:56:06.660643 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534990-6x9qf"] Feb 26 09:56:07 crc kubenswrapper[4741]: I0226 09:56:07.804063 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9204c179-3ec8-4aba-87a2-2b95ab4d6552" path="/var/lib/kubelet/pods/9204c179-3ec8-4aba-87a2-2b95ab4d6552/volumes" Feb 26 09:56:09 crc kubenswrapper[4741]: I0226 09:56:09.448494 4741 scope.go:117] "RemoveContainer" containerID="703a658f682910e12e647a8a2cbad81187870482f274d296f8b4187f68ecd0b8" Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.244304 4741 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-dmdkd/must-gather-xtkwd"] Feb 26 09:56:17 crc kubenswrapper[4741]: E0226 09:56:17.245681 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e032146-0ca3-4914-9303-16d588739174" containerName="oc" Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.245698 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e032146-0ca3-4914-9303-16d588739174" containerName="oc" Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.246000 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e032146-0ca3-4914-9303-16d588739174" containerName="oc" Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.247497 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dmdkd/must-gather-xtkwd" Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.255240 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dmdkd"/"kube-root-ca.crt" Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.256769 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dmdkd"/"openshift-service-ca.crt" Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.290617 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dmdkd/must-gather-xtkwd"] Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.335508 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4bbv\" (UniqueName: \"kubernetes.io/projected/9c3a910a-f9e9-42cc-894b-d73a7fd35c4d-kube-api-access-k4bbv\") pod \"must-gather-xtkwd\" (UID: \"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d\") " pod="openshift-must-gather-dmdkd/must-gather-xtkwd" Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.336211 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/9c3a910a-f9e9-42cc-894b-d73a7fd35c4d-must-gather-output\") pod \"must-gather-xtkwd\" (UID: \"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d\") " pod="openshift-must-gather-dmdkd/must-gather-xtkwd" Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.449710 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c3a910a-f9e9-42cc-894b-d73a7fd35c4d-must-gather-output\") pod \"must-gather-xtkwd\" (UID: \"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d\") " pod="openshift-must-gather-dmdkd/must-gather-xtkwd" Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.450176 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4bbv\" (UniqueName: \"kubernetes.io/projected/9c3a910a-f9e9-42cc-894b-d73a7fd35c4d-kube-api-access-k4bbv\") pod \"must-gather-xtkwd\" (UID: \"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d\") " pod="openshift-must-gather-dmdkd/must-gather-xtkwd" Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.451334 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c3a910a-f9e9-42cc-894b-d73a7fd35c4d-must-gather-output\") pod \"must-gather-xtkwd\" (UID: \"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d\") " pod="openshift-must-gather-dmdkd/must-gather-xtkwd" Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.479476 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4bbv\" (UniqueName: \"kubernetes.io/projected/9c3a910a-f9e9-42cc-894b-d73a7fd35c4d-kube-api-access-k4bbv\") pod \"must-gather-xtkwd\" (UID: \"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d\") " pod="openshift-must-gather-dmdkd/must-gather-xtkwd" Feb 26 09:56:17 crc kubenswrapper[4741]: I0226 09:56:17.574130 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dmdkd/must-gather-xtkwd" Feb 26 09:56:18 crc kubenswrapper[4741]: I0226 09:56:18.163813 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dmdkd/must-gather-xtkwd"] Feb 26 09:56:18 crc kubenswrapper[4741]: I0226 09:56:18.792203 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dmdkd/must-gather-xtkwd" event={"ID":"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d","Type":"ContainerStarted","Data":"6c89d772ac6bbe48bf365b3306ed74a89aede1e7ce58f7100592743f6b524c8c"} Feb 26 09:56:27 crc kubenswrapper[4741]: I0226 09:56:27.928420 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dmdkd/must-gather-xtkwd" event={"ID":"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d","Type":"ContainerStarted","Data":"073327ca617160c509dba045a43cfbf6c6462b37e031e81500d588be065ca07e"} Feb 26 09:56:27 crc kubenswrapper[4741]: I0226 09:56:27.929128 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dmdkd/must-gather-xtkwd" event={"ID":"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d","Type":"ContainerStarted","Data":"e74ebaabc25295a499b890f4e76d17ed316d3bfb65d9afec1a6be27bc8f61803"} Feb 26 09:56:27 crc kubenswrapper[4741]: I0226 09:56:27.968094 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dmdkd/must-gather-xtkwd" podStartSLOduration=2.167081706 podStartE2EDuration="10.968065149s" podCreationTimestamp="2026-02-26 09:56:17 +0000 UTC" firstStartedPulling="2026-02-26 09:56:18.311745834 +0000 UTC m=+6213.307683221" lastFinishedPulling="2026-02-26 09:56:27.112729277 +0000 UTC m=+6222.108666664" observedRunningTime="2026-02-26 09:56:27.948871053 +0000 UTC m=+6222.944808460" watchObservedRunningTime="2026-02-26 09:56:27.968065149 +0000 UTC m=+6222.964002546" Feb 26 09:56:34 crc kubenswrapper[4741]: I0226 09:56:34.033753 4741 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-dmdkd/crc-debug-ftb9w"] Feb 26 09:56:34 crc kubenswrapper[4741]: I0226 09:56:34.037036 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dmdkd/crc-debug-ftb9w" Feb 26 09:56:34 crc kubenswrapper[4741]: I0226 09:56:34.043547 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dmdkd"/"default-dockercfg-w77nj" Feb 26 09:56:34 crc kubenswrapper[4741]: I0226 09:56:34.146061 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8bl\" (UniqueName: \"kubernetes.io/projected/db42036e-b805-469b-a55f-f81c115f48a2-kube-api-access-hn8bl\") pod \"crc-debug-ftb9w\" (UID: \"db42036e-b805-469b-a55f-f81c115f48a2\") " pod="openshift-must-gather-dmdkd/crc-debug-ftb9w" Feb 26 09:56:34 crc kubenswrapper[4741]: I0226 09:56:34.146178 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db42036e-b805-469b-a55f-f81c115f48a2-host\") pod \"crc-debug-ftb9w\" (UID: \"db42036e-b805-469b-a55f-f81c115f48a2\") " pod="openshift-must-gather-dmdkd/crc-debug-ftb9w" Feb 26 09:56:34 crc kubenswrapper[4741]: I0226 09:56:34.248981 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn8bl\" (UniqueName: \"kubernetes.io/projected/db42036e-b805-469b-a55f-f81c115f48a2-kube-api-access-hn8bl\") pod \"crc-debug-ftb9w\" (UID: \"db42036e-b805-469b-a55f-f81c115f48a2\") " pod="openshift-must-gather-dmdkd/crc-debug-ftb9w" Feb 26 09:56:34 crc kubenswrapper[4741]: I0226 09:56:34.249067 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db42036e-b805-469b-a55f-f81c115f48a2-host\") pod \"crc-debug-ftb9w\" (UID: \"db42036e-b805-469b-a55f-f81c115f48a2\") " pod="openshift-must-gather-dmdkd/crc-debug-ftb9w" Feb 26 09:56:34 crc 
kubenswrapper[4741]: I0226 09:56:34.250208 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db42036e-b805-469b-a55f-f81c115f48a2-host\") pod \"crc-debug-ftb9w\" (UID: \"db42036e-b805-469b-a55f-f81c115f48a2\") " pod="openshift-must-gather-dmdkd/crc-debug-ftb9w" Feb 26 09:56:34 crc kubenswrapper[4741]: I0226 09:56:34.275465 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn8bl\" (UniqueName: \"kubernetes.io/projected/db42036e-b805-469b-a55f-f81c115f48a2-kube-api-access-hn8bl\") pod \"crc-debug-ftb9w\" (UID: \"db42036e-b805-469b-a55f-f81c115f48a2\") " pod="openshift-must-gather-dmdkd/crc-debug-ftb9w" Feb 26 09:56:34 crc kubenswrapper[4741]: I0226 09:56:34.371339 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dmdkd/crc-debug-ftb9w" Feb 26 09:56:35 crc kubenswrapper[4741]: I0226 09:56:35.020215 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dmdkd/crc-debug-ftb9w" event={"ID":"db42036e-b805-469b-a55f-f81c115f48a2","Type":"ContainerStarted","Data":"63a227884332f17ca881f2b44c02c63eba22b891221af40888bb24d1998cf890"} Feb 26 09:56:47 crc kubenswrapper[4741]: I0226 09:56:47.195240 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dmdkd/crc-debug-ftb9w" event={"ID":"db42036e-b805-469b-a55f-f81c115f48a2","Type":"ContainerStarted","Data":"77c01d49e02481eacbd05bd15a8302b5a117a30d72b870a9e2fd4c6199855f26"} Feb 26 09:56:47 crc kubenswrapper[4741]: I0226 09:56:47.220711 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dmdkd/crc-debug-ftb9w" podStartSLOduration=0.980670769 podStartE2EDuration="13.220681841s" podCreationTimestamp="2026-02-26 09:56:34 +0000 UTC" firstStartedPulling="2026-02-26 09:56:34.441288686 +0000 UTC m=+6229.437226073" lastFinishedPulling="2026-02-26 09:56:46.681299758 +0000 UTC 
m=+6241.677237145" observedRunningTime="2026-02-26 09:56:47.209128312 +0000 UTC m=+6242.205065699" watchObservedRunningTime="2026-02-26 09:56:47.220681841 +0000 UTC m=+6242.216619228" Feb 26 09:57:25 crc kubenswrapper[4741]: I0226 09:57:25.149193 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:57:25 crc kubenswrapper[4741]: I0226 09:57:25.149859 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:57:55 crc kubenswrapper[4741]: I0226 09:57:55.149853 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:57:55 crc kubenswrapper[4741]: I0226 09:57:55.150647 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:57:57 crc kubenswrapper[4741]: I0226 09:57:57.962748 4741 generic.go:334] "Generic (PLEG): container finished" podID="db42036e-b805-469b-a55f-f81c115f48a2" containerID="77c01d49e02481eacbd05bd15a8302b5a117a30d72b870a9e2fd4c6199855f26" exitCode=0 Feb 26 09:57:57 crc kubenswrapper[4741]: I0226 09:57:57.963136 
4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dmdkd/crc-debug-ftb9w" event={"ID":"db42036e-b805-469b-a55f-f81c115f48a2","Type":"ContainerDied","Data":"77c01d49e02481eacbd05bd15a8302b5a117a30d72b870a9e2fd4c6199855f26"} Feb 26 09:57:59 crc kubenswrapper[4741]: I0226 09:57:59.137646 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dmdkd/crc-debug-ftb9w" Feb 26 09:57:59 crc kubenswrapper[4741]: I0226 09:57:59.184476 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dmdkd/crc-debug-ftb9w"] Feb 26 09:57:59 crc kubenswrapper[4741]: I0226 09:57:59.197637 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dmdkd/crc-debug-ftb9w"] Feb 26 09:57:59 crc kubenswrapper[4741]: I0226 09:57:59.303824 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db42036e-b805-469b-a55f-f81c115f48a2-host\") pod \"db42036e-b805-469b-a55f-f81c115f48a2\" (UID: \"db42036e-b805-469b-a55f-f81c115f48a2\") " Feb 26 09:57:59 crc kubenswrapper[4741]: I0226 09:57:59.303923 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db42036e-b805-469b-a55f-f81c115f48a2-host" (OuterVolumeSpecName: "host") pod "db42036e-b805-469b-a55f-f81c115f48a2" (UID: "db42036e-b805-469b-a55f-f81c115f48a2"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 09:57:59 crc kubenswrapper[4741]: I0226 09:57:59.304131 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn8bl\" (UniqueName: \"kubernetes.io/projected/db42036e-b805-469b-a55f-f81c115f48a2-kube-api-access-hn8bl\") pod \"db42036e-b805-469b-a55f-f81c115f48a2\" (UID: \"db42036e-b805-469b-a55f-f81c115f48a2\") "
Feb 26 09:57:59 crc kubenswrapper[4741]: I0226 09:57:59.304898 4741 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db42036e-b805-469b-a55f-f81c115f48a2-host\") on node \"crc\" DevicePath \"\""
Feb 26 09:57:59 crc kubenswrapper[4741]: I0226 09:57:59.314651 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db42036e-b805-469b-a55f-f81c115f48a2-kube-api-access-hn8bl" (OuterVolumeSpecName: "kube-api-access-hn8bl") pod "db42036e-b805-469b-a55f-f81c115f48a2" (UID: "db42036e-b805-469b-a55f-f81c115f48a2"). InnerVolumeSpecName "kube-api-access-hn8bl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:57:59 crc kubenswrapper[4741]: I0226 09:57:59.407613 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn8bl\" (UniqueName: \"kubernetes.io/projected/db42036e-b805-469b-a55f-f81c115f48a2-kube-api-access-hn8bl\") on node \"crc\" DevicePath \"\""
Feb 26 09:57:59 crc kubenswrapper[4741]: I0226 09:57:59.801712 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db42036e-b805-469b-a55f-f81c115f48a2" path="/var/lib/kubelet/pods/db42036e-b805-469b-a55f-f81c115f48a2/volumes"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.004692 4741 scope.go:117] "RemoveContainer" containerID="77c01d49e02481eacbd05bd15a8302b5a117a30d72b870a9e2fd4c6199855f26"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.004742 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dmdkd/crc-debug-ftb9w"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.188347 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534998-jqwwg"]
Feb 26 09:58:00 crc kubenswrapper[4741]: E0226 09:58:00.190422 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db42036e-b805-469b-a55f-f81c115f48a2" containerName="container-00"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.190468 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="db42036e-b805-469b-a55f-f81c115f48a2" containerName="container-00"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.194282 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="db42036e-b805-469b-a55f-f81c115f48a2" containerName="container-00"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.196228 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534998-jqwwg"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.211952 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534998-jqwwg"]
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.216419 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.216709 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.216995 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.334924 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trq8r\" (UniqueName: \"kubernetes.io/projected/a6f86dcc-1861-414d-8554-f65b604a21b0-kube-api-access-trq8r\") pod \"auto-csr-approver-29534998-jqwwg\" (UID: \"a6f86dcc-1861-414d-8554-f65b604a21b0\") " pod="openshift-infra/auto-csr-approver-29534998-jqwwg"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.437859 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trq8r\" (UniqueName: \"kubernetes.io/projected/a6f86dcc-1861-414d-8554-f65b604a21b0-kube-api-access-trq8r\") pod \"auto-csr-approver-29534998-jqwwg\" (UID: \"a6f86dcc-1861-414d-8554-f65b604a21b0\") " pod="openshift-infra/auto-csr-approver-29534998-jqwwg"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.450546 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dmdkd/crc-debug-sbm8g"]
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.452368 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dmdkd/crc-debug-sbm8g"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.454603 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dmdkd"/"default-dockercfg-w77nj"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.464982 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trq8r\" (UniqueName: \"kubernetes.io/projected/a6f86dcc-1861-414d-8554-f65b604a21b0-kube-api-access-trq8r\") pod \"auto-csr-approver-29534998-jqwwg\" (UID: \"a6f86dcc-1861-414d-8554-f65b604a21b0\") " pod="openshift-infra/auto-csr-approver-29534998-jqwwg"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.539469 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534998-jqwwg"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.540957 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdh62\" (UniqueName: \"kubernetes.io/projected/3e72d6b4-b7bc-443a-b2c4-649ae89e15cc-kube-api-access-xdh62\") pod \"crc-debug-sbm8g\" (UID: \"3e72d6b4-b7bc-443a-b2c4-649ae89e15cc\") " pod="openshift-must-gather-dmdkd/crc-debug-sbm8g"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.541081 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3e72d6b4-b7bc-443a-b2c4-649ae89e15cc-host\") pod \"crc-debug-sbm8g\" (UID: \"3e72d6b4-b7bc-443a-b2c4-649ae89e15cc\") " pod="openshift-must-gather-dmdkd/crc-debug-sbm8g"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.644283 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdh62\" (UniqueName: \"kubernetes.io/projected/3e72d6b4-b7bc-443a-b2c4-649ae89e15cc-kube-api-access-xdh62\") pod \"crc-debug-sbm8g\" (UID: \"3e72d6b4-b7bc-443a-b2c4-649ae89e15cc\") " pod="openshift-must-gather-dmdkd/crc-debug-sbm8g"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.644334 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3e72d6b4-b7bc-443a-b2c4-649ae89e15cc-host\") pod \"crc-debug-sbm8g\" (UID: \"3e72d6b4-b7bc-443a-b2c4-649ae89e15cc\") " pod="openshift-must-gather-dmdkd/crc-debug-sbm8g"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.644529 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3e72d6b4-b7bc-443a-b2c4-649ae89e15cc-host\") pod \"crc-debug-sbm8g\" (UID: \"3e72d6b4-b7bc-443a-b2c4-649ae89e15cc\") " pod="openshift-must-gather-dmdkd/crc-debug-sbm8g"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.664520 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdh62\" (UniqueName: \"kubernetes.io/projected/3e72d6b4-b7bc-443a-b2c4-649ae89e15cc-kube-api-access-xdh62\") pod \"crc-debug-sbm8g\" (UID: \"3e72d6b4-b7bc-443a-b2c4-649ae89e15cc\") " pod="openshift-must-gather-dmdkd/crc-debug-sbm8g"
Feb 26 09:58:00 crc kubenswrapper[4741]: I0226 09:58:00.839329 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dmdkd/crc-debug-sbm8g"
Feb 26 09:58:01 crc kubenswrapper[4741]: I0226 09:58:01.027666 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dmdkd/crc-debug-sbm8g" event={"ID":"3e72d6b4-b7bc-443a-b2c4-649ae89e15cc","Type":"ContainerStarted","Data":"97bd1987689c129d4e4ba09bb50bef0593dab380c4199b459bc1b749fcdb353e"}
Feb 26 09:58:01 crc kubenswrapper[4741]: I0226 09:58:01.347046 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534998-jqwwg"]
Feb 26 09:58:02 crc kubenswrapper[4741]: I0226 09:58:02.058097 4741 generic.go:334] "Generic (PLEG): container finished" podID="3e72d6b4-b7bc-443a-b2c4-649ae89e15cc" containerID="84034d73d633f131cbb2d30772fe9dcc960fc3625d9931aa3068074c2cf37fdf" exitCode=0
Feb 26 09:58:02 crc kubenswrapper[4741]: I0226 09:58:02.058181 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dmdkd/crc-debug-sbm8g" event={"ID":"3e72d6b4-b7bc-443a-b2c4-649ae89e15cc","Type":"ContainerDied","Data":"84034d73d633f131cbb2d30772fe9dcc960fc3625d9931aa3068074c2cf37fdf"}
Feb 26 09:58:02 crc kubenswrapper[4741]: I0226 09:58:02.060398 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534998-jqwwg" event={"ID":"a6f86dcc-1861-414d-8554-f65b604a21b0","Type":"ContainerStarted","Data":"ba256a40aa4013f8ee8da729f038332d0d46454d37474a92f668a6d9732d4828"}
Feb 26 09:58:03 crc kubenswrapper[4741]: I0226 09:58:03.245527 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dmdkd/crc-debug-sbm8g"
Feb 26 09:58:03 crc kubenswrapper[4741]: I0226 09:58:03.386911 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3e72d6b4-b7bc-443a-b2c4-649ae89e15cc-host\") pod \"3e72d6b4-b7bc-443a-b2c4-649ae89e15cc\" (UID: \"3e72d6b4-b7bc-443a-b2c4-649ae89e15cc\") "
Feb 26 09:58:03 crc kubenswrapper[4741]: I0226 09:58:03.387039 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e72d6b4-b7bc-443a-b2c4-649ae89e15cc-host" (OuterVolumeSpecName: "host") pod "3e72d6b4-b7bc-443a-b2c4-649ae89e15cc" (UID: "3e72d6b4-b7bc-443a-b2c4-649ae89e15cc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 09:58:03 crc kubenswrapper[4741]: I0226 09:58:03.387718 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdh62\" (UniqueName: \"kubernetes.io/projected/3e72d6b4-b7bc-443a-b2c4-649ae89e15cc-kube-api-access-xdh62\") pod \"3e72d6b4-b7bc-443a-b2c4-649ae89e15cc\" (UID: \"3e72d6b4-b7bc-443a-b2c4-649ae89e15cc\") "
Feb 26 09:58:03 crc kubenswrapper[4741]: I0226 09:58:03.390976 4741 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3e72d6b4-b7bc-443a-b2c4-649ae89e15cc-host\") on node \"crc\" DevicePath \"\""
Feb 26 09:58:03 crc kubenswrapper[4741]: I0226 09:58:03.418150 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e72d6b4-b7bc-443a-b2c4-649ae89e15cc-kube-api-access-xdh62" (OuterVolumeSpecName: "kube-api-access-xdh62") pod "3e72d6b4-b7bc-443a-b2c4-649ae89e15cc" (UID: "3e72d6b4-b7bc-443a-b2c4-649ae89e15cc"). InnerVolumeSpecName "kube-api-access-xdh62".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:58:03 crc kubenswrapper[4741]: I0226 09:58:03.494068 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdh62\" (UniqueName: \"kubernetes.io/projected/3e72d6b4-b7bc-443a-b2c4-649ae89e15cc-kube-api-access-xdh62\") on node \"crc\" DevicePath \"\""
Feb 26 09:58:04 crc kubenswrapper[4741]: I0226 09:58:04.096512 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dmdkd/crc-debug-sbm8g" event={"ID":"3e72d6b4-b7bc-443a-b2c4-649ae89e15cc","Type":"ContainerDied","Data":"97bd1987689c129d4e4ba09bb50bef0593dab380c4199b459bc1b749fcdb353e"}
Feb 26 09:58:04 crc kubenswrapper[4741]: I0226 09:58:04.096550 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dmdkd/crc-debug-sbm8g"
Feb 26 09:58:04 crc kubenswrapper[4741]: I0226 09:58:04.096595 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97bd1987689c129d4e4ba09bb50bef0593dab380c4199b459bc1b749fcdb353e"
Feb 26 09:58:04 crc kubenswrapper[4741]: I0226 09:58:04.099601 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534998-jqwwg" event={"ID":"a6f86dcc-1861-414d-8554-f65b604a21b0","Type":"ContainerStarted","Data":"f4de9a25dc509c99bc8e4a7e3a5d5cf22068bc3c2b5a9b26b009329202a39e2e"}
Feb 26 09:58:04 crc kubenswrapper[4741]: I0226 09:58:04.143104 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29534998-jqwwg" podStartSLOduration=2.4849514409999998 podStartE2EDuration="4.143075149s" podCreationTimestamp="2026-02-26 09:58:00 +0000 UTC" firstStartedPulling="2026-02-26 09:58:01.350295875 +0000 UTC m=+6316.346233262" lastFinishedPulling="2026-02-26 09:58:03.008419583 +0000 UTC m=+6318.004356970" observedRunningTime="2026-02-26 09:58:04.11673409 +0000 UTC m=+6319.112671487" watchObservedRunningTime="2026-02-26 09:58:04.143075149 +0000 UTC m=+6319.139012526"
Feb 26 09:58:04 crc kubenswrapper[4741]: I0226 09:58:04.734871 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dmdkd/crc-debug-sbm8g"]
Feb 26 09:58:04 crc kubenswrapper[4741]: I0226 09:58:04.748444 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dmdkd/crc-debug-sbm8g"]
Feb 26 09:58:05 crc kubenswrapper[4741]: I0226 09:58:05.810211 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e72d6b4-b7bc-443a-b2c4-649ae89e15cc" path="/var/lib/kubelet/pods/3e72d6b4-b7bc-443a-b2c4-649ae89e15cc/volumes"
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.124263 4741 generic.go:334] "Generic (PLEG): container finished" podID="a6f86dcc-1861-414d-8554-f65b604a21b0" containerID="f4de9a25dc509c99bc8e4a7e3a5d5cf22068bc3c2b5a9b26b009329202a39e2e" exitCode=0
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.124317 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534998-jqwwg" event={"ID":"a6f86dcc-1861-414d-8554-f65b604a21b0","Type":"ContainerDied","Data":"f4de9a25dc509c99bc8e4a7e3a5d5cf22068bc3c2b5a9b26b009329202a39e2e"}
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.310773 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dmdkd/crc-debug-vf6sq"]
Feb 26 09:58:06 crc kubenswrapper[4741]: E0226 09:58:06.311477 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e72d6b4-b7bc-443a-b2c4-649ae89e15cc" containerName="container-00"
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.311500 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e72d6b4-b7bc-443a-b2c4-649ae89e15cc" containerName="container-00"
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.311871 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e72d6b4-b7bc-443a-b2c4-649ae89e15cc" containerName="container-00"
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.312911 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dmdkd/crc-debug-vf6sq"
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.315500 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dmdkd"/"default-dockercfg-w77nj"
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.375219 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bcd2ff0d-8846-4fee-8d1b-547f7bf33410-host\") pod \"crc-debug-vf6sq\" (UID: \"bcd2ff0d-8846-4fee-8d1b-547f7bf33410\") " pod="openshift-must-gather-dmdkd/crc-debug-vf6sq"
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.375722 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b6nx\" (UniqueName: \"kubernetes.io/projected/bcd2ff0d-8846-4fee-8d1b-547f7bf33410-kube-api-access-4b6nx\") pod \"crc-debug-vf6sq\" (UID: \"bcd2ff0d-8846-4fee-8d1b-547f7bf33410\") " pod="openshift-must-gather-dmdkd/crc-debug-vf6sq"
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.478841 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bcd2ff0d-8846-4fee-8d1b-547f7bf33410-host\") pod \"crc-debug-vf6sq\" (UID: \"bcd2ff0d-8846-4fee-8d1b-547f7bf33410\") " pod="openshift-must-gather-dmdkd/crc-debug-vf6sq"
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.478938 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b6nx\" (UniqueName: \"kubernetes.io/projected/bcd2ff0d-8846-4fee-8d1b-547f7bf33410-kube-api-access-4b6nx\") pod \"crc-debug-vf6sq\" (UID: \"bcd2ff0d-8846-4fee-8d1b-547f7bf33410\") " pod="openshift-must-gather-dmdkd/crc-debug-vf6sq"
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.479403 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bcd2ff0d-8846-4fee-8d1b-547f7bf33410-host\") pod \"crc-debug-vf6sq\" (UID: \"bcd2ff0d-8846-4fee-8d1b-547f7bf33410\") " pod="openshift-must-gather-dmdkd/crc-debug-vf6sq"
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.503695 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b6nx\" (UniqueName: \"kubernetes.io/projected/bcd2ff0d-8846-4fee-8d1b-547f7bf33410-kube-api-access-4b6nx\") pod \"crc-debug-vf6sq\" (UID: \"bcd2ff0d-8846-4fee-8d1b-547f7bf33410\") " pod="openshift-must-gather-dmdkd/crc-debug-vf6sq"
Feb 26 09:58:06 crc kubenswrapper[4741]: I0226 09:58:06.636221 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dmdkd/crc-debug-vf6sq"
Feb 26 09:58:06 crc kubenswrapper[4741]: W0226 09:58:06.692660 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcd2ff0d_8846_4fee_8d1b_547f7bf33410.slice/crio-b1db2289b66109e1cabd37b5d2bde163b13f8b862b9054ea18470f115d149e64 WatchSource:0}: Error finding container b1db2289b66109e1cabd37b5d2bde163b13f8b862b9054ea18470f115d149e64: Status 404 returned error can't find the container with id b1db2289b66109e1cabd37b5d2bde163b13f8b862b9054ea18470f115d149e64
Feb 26 09:58:07 crc kubenswrapper[4741]: I0226 09:58:07.138220 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dmdkd/crc-debug-vf6sq" event={"ID":"bcd2ff0d-8846-4fee-8d1b-547f7bf33410","Type":"ContainerStarted","Data":"b1db2289b66109e1cabd37b5d2bde163b13f8b862b9054ea18470f115d149e64"}
Feb 26 09:58:07 crc kubenswrapper[4741]: I0226 09:58:07.588726 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534998-jqwwg"
Feb 26 09:58:07 crc kubenswrapper[4741]: I0226 09:58:07.707824 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trq8r\" (UniqueName: \"kubernetes.io/projected/a6f86dcc-1861-414d-8554-f65b604a21b0-kube-api-access-trq8r\") pod \"a6f86dcc-1861-414d-8554-f65b604a21b0\" (UID: \"a6f86dcc-1861-414d-8554-f65b604a21b0\") "
Feb 26 09:58:07 crc kubenswrapper[4741]: I0226 09:58:07.717422 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f86dcc-1861-414d-8554-f65b604a21b0-kube-api-access-trq8r" (OuterVolumeSpecName: "kube-api-access-trq8r") pod "a6f86dcc-1861-414d-8554-f65b604a21b0" (UID: "a6f86dcc-1861-414d-8554-f65b604a21b0"). InnerVolumeSpecName "kube-api-access-trq8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:58:07 crc kubenswrapper[4741]: I0226 09:58:07.811627 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trq8r\" (UniqueName: \"kubernetes.io/projected/a6f86dcc-1861-414d-8554-f65b604a21b0-kube-api-access-trq8r\") on node \"crc\" DevicePath \"\""
Feb 26 09:58:08 crc kubenswrapper[4741]: I0226 09:58:08.150217 4741 generic.go:334] "Generic (PLEG): container finished" podID="bcd2ff0d-8846-4fee-8d1b-547f7bf33410" containerID="1cf258f807e980046485ce9c65b12fe5e2153fdc59be5a0d3f443d809117fb0c" exitCode=0
Feb 26 09:58:08 crc kubenswrapper[4741]: I0226 09:58:08.150329 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dmdkd/crc-debug-vf6sq" event={"ID":"bcd2ff0d-8846-4fee-8d1b-547f7bf33410","Type":"ContainerDied","Data":"1cf258f807e980046485ce9c65b12fe5e2153fdc59be5a0d3f443d809117fb0c"}
Feb 26 09:58:08 crc kubenswrapper[4741]: I0226 09:58:08.152200 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534998-jqwwg" event={"ID":"a6f86dcc-1861-414d-8554-f65b604a21b0","Type":"ContainerDied","Data":"ba256a40aa4013f8ee8da729f038332d0d46454d37474a92f668a6d9732d4828"}
Feb 26 09:58:08 crc kubenswrapper[4741]: I0226 09:58:08.152233 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba256a40aa4013f8ee8da729f038332d0d46454d37474a92f668a6d9732d4828"
Feb 26 09:58:08 crc kubenswrapper[4741]: I0226 09:58:08.152267 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534998-jqwwg"
Feb 26 09:58:08 crc kubenswrapper[4741]: I0226 09:58:08.212161 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dmdkd/crc-debug-vf6sq"]
Feb 26 09:58:08 crc kubenswrapper[4741]: I0226 09:58:08.230060 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dmdkd/crc-debug-vf6sq"]
Feb 26 09:58:08 crc kubenswrapper[4741]: I0226 09:58:08.265959 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534992-mrfb7"]
Feb 26 09:58:08 crc kubenswrapper[4741]: I0226 09:58:08.277975 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534992-mrfb7"]
Feb 26 09:58:09 crc kubenswrapper[4741]: I0226 09:58:09.308473 4741 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-dmdkd/crc-debug-vf6sq"
Feb 26 09:58:09 crc kubenswrapper[4741]: I0226 09:58:09.451944 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b6nx\" (UniqueName: \"kubernetes.io/projected/bcd2ff0d-8846-4fee-8d1b-547f7bf33410-kube-api-access-4b6nx\") pod \"bcd2ff0d-8846-4fee-8d1b-547f7bf33410\" (UID: \"bcd2ff0d-8846-4fee-8d1b-547f7bf33410\") "
Feb 26 09:58:09 crc kubenswrapper[4741]: I0226 09:58:09.452445 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bcd2ff0d-8846-4fee-8d1b-547f7bf33410-host\") pod \"bcd2ff0d-8846-4fee-8d1b-547f7bf33410\" (UID: \"bcd2ff0d-8846-4fee-8d1b-547f7bf33410\") "
Feb 26 09:58:09 crc kubenswrapper[4741]: I0226 09:58:09.452580 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcd2ff0d-8846-4fee-8d1b-547f7bf33410-host" (OuterVolumeSpecName: "host") pod "bcd2ff0d-8846-4fee-8d1b-547f7bf33410" (UID: "bcd2ff0d-8846-4fee-8d1b-547f7bf33410"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 09:58:09 crc kubenswrapper[4741]: I0226 09:58:09.453381 4741 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bcd2ff0d-8846-4fee-8d1b-547f7bf33410-host\") on node \"crc\" DevicePath \"\""
Feb 26 09:58:09 crc kubenswrapper[4741]: I0226 09:58:09.458378 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd2ff0d-8846-4fee-8d1b-547f7bf33410-kube-api-access-4b6nx" (OuterVolumeSpecName: "kube-api-access-4b6nx") pod "bcd2ff0d-8846-4fee-8d1b-547f7bf33410" (UID: "bcd2ff0d-8846-4fee-8d1b-547f7bf33410"). InnerVolumeSpecName "kube-api-access-4b6nx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:58:09 crc kubenswrapper[4741]: I0226 09:58:09.556257 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b6nx\" (UniqueName: \"kubernetes.io/projected/bcd2ff0d-8846-4fee-8d1b-547f7bf33410-kube-api-access-4b6nx\") on node \"crc\" DevicePath \"\""
Feb 26 09:58:09 crc kubenswrapper[4741]: I0226 09:58:09.800376 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="514e05a2-2165-4f5a-bdff-92c787343941" path="/var/lib/kubelet/pods/514e05a2-2165-4f5a-bdff-92c787343941/volumes"
Feb 26 09:58:09 crc kubenswrapper[4741]: I0226 09:58:09.801166 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcd2ff0d-8846-4fee-8d1b-547f7bf33410" path="/var/lib/kubelet/pods/bcd2ff0d-8846-4fee-8d1b-547f7bf33410/volumes"
Feb 26 09:58:09 crc kubenswrapper[4741]: I0226 09:58:09.821278 4741 scope.go:117] "RemoveContainer" containerID="f487180867be0fff1dd6b7d8d1cf8fc9c29dbd1b500628835d9cb814f42d314a"
Feb 26 09:58:10 crc kubenswrapper[4741]: I0226 09:58:10.181970 4741 scope.go:117] "RemoveContainer" containerID="1cf258f807e980046485ce9c65b12fe5e2153fdc59be5a0d3f443d809117fb0c"
Feb 26 09:58:10 crc kubenswrapper[4741]: I0226 09:58:10.182029 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dmdkd/crc-debug-vf6sq"
Feb 26 09:58:25 crc kubenswrapper[4741]: I0226 09:58:25.149583 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 09:58:25 crc kubenswrapper[4741]: I0226 09:58:25.150095 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 09:58:25 crc kubenswrapper[4741]: I0226 09:58:25.150167 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s"
Feb 26 09:58:25 crc kubenswrapper[4741]: I0226 09:58:25.151295 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f65ec1d0cad211656d2d423786ba8e61cca81111a476f3647f3bac068f0670b6"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 09:58:25 crc kubenswrapper[4741]: I0226 09:58:25.151361 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://f65ec1d0cad211656d2d423786ba8e61cca81111a476f3647f3bac068f0670b6" gracePeriod=600
Feb 26 09:58:25 crc kubenswrapper[4741]: I0226 09:58:25.362022 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="f65ec1d0cad211656d2d423786ba8e61cca81111a476f3647f3bac068f0670b6" exitCode=0
Feb 26 09:58:25 crc kubenswrapper[4741]: I0226 09:58:25.362146 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"f65ec1d0cad211656d2d423786ba8e61cca81111a476f3647f3bac068f0670b6"}
Feb 26 09:58:25 crc kubenswrapper[4741]: I0226 09:58:25.362580 4741 scope.go:117] "RemoveContainer" containerID="5834ca8f806bccad56e079e0c101f7a5a0f8524f742ef252bd1e67a17473dd7e"
Feb 26 09:58:26 crc kubenswrapper[4741]: I0226 09:58:26.379679 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576"}
Feb 26 09:58:44 crc kubenswrapper[4741]: I0226 09:58:44.152706 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_a889afc5-6db0-4421-a04c-4ea08557d068/aodh-api/0.log"
Feb 26 09:58:44 crc kubenswrapper[4741]: I0226 09:58:44.496001 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_a889afc5-6db0-4421-a04c-4ea08557d068/aodh-listener/0.log"
Feb 26 09:58:44 crc kubenswrapper[4741]: I0226 09:58:44.511483 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_a889afc5-6db0-4421-a04c-4ea08557d068/aodh-evaluator/0.log"
Feb 26 09:58:44 crc kubenswrapper[4741]: I0226 09:58:44.610330 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_a889afc5-6db0-4421-a04c-4ea08557d068/aodh-notifier/0.log"
Feb 26 09:58:44 crc kubenswrapper[4741]: I0226 09:58:44.703964 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b8557bd7d-cnk2f_5e8d427c-00a8-4c0f-acee-63d42390501d/barbican-api/0.log"
Feb 26 09:58:44 crc kubenswrapper[4741]: I0226 09:58:44.779883 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b8557bd7d-cnk2f_5e8d427c-00a8-4c0f-acee-63d42390501d/barbican-api-log/0.log"
Feb 26 09:58:44 crc kubenswrapper[4741]: I0226 09:58:44.987653 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c4c8dc778-mb68n_8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b/barbican-keystone-listener/0.log"
Feb 26 09:58:45 crc kubenswrapper[4741]: I0226 09:58:45.062885 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c4c8dc778-mb68n_8e44d18d-31fb-4d22-a8dd-c3ddad4ef24b/barbican-keystone-listener-log/0.log"
Feb 26 09:58:45 crc kubenswrapper[4741]: I0226 09:58:45.527196 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-768d8b48ff-xfq8n_4410803b-98d4-4e00-a854-4427dd5d3ebc/barbican-worker/0.log"
Feb 26 09:58:45 crc kubenswrapper[4741]: I0226 09:58:45.580002 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-768d8b48ff-xfq8n_4410803b-98d4-4e00-a854-4427dd5d3ebc/barbican-worker-log/0.log"
Feb 26 09:58:45 crc kubenswrapper[4741]: I0226 09:58:45.737868 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-65twb_f9b8a965-3073-4b51-8dfc-a1bdf31ab63e/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 26 09:58:45 crc kubenswrapper[4741]: I0226 09:58:45.950349 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ad35d04e-1800-463f-8059-29fac13e2947/ceilometer-central-agent/1.log"
Feb 26 09:58:46 crc kubenswrapper[4741]: I0226 09:58:46.066716 4741 log.go:25] "Finished parsing log file"
path="/var/log/pods/openstack_ceilometer-0_ad35d04e-1800-463f-8059-29fac13e2947/ceilometer-notification-agent/0.log"
Feb 26 09:58:46 crc kubenswrapper[4741]: I0226 09:58:46.089682 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ad35d04e-1800-463f-8059-29fac13e2947/ceilometer-central-agent/0.log"
Feb 26 09:58:46 crc kubenswrapper[4741]: I0226 09:58:46.103455 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ad35d04e-1800-463f-8059-29fac13e2947/proxy-httpd/0.log"
Feb 26 09:58:46 crc kubenswrapper[4741]: I0226 09:58:46.220457 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ad35d04e-1800-463f-8059-29fac13e2947/sg-core/0.log"
Feb 26 09:58:46 crc kubenswrapper[4741]: I0226 09:58:46.406929 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d83c4a22-8843-4882-9c41-0a5c11ba9dff/cinder-api-log/0.log"
Feb 26 09:58:46 crc kubenswrapper[4741]: I0226 09:58:46.479936 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d83c4a22-8843-4882-9c41-0a5c11ba9dff/cinder-api/0.log"
Feb 26 09:58:46 crc kubenswrapper[4741]: I0226 09:58:46.704493 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_035e58f7-7a11-4584-baee-a4036a07b94b/cinder-scheduler/0.log"
Feb 26 09:58:46 crc kubenswrapper[4741]: I0226 09:58:46.743737 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wpwsk_a34fc776-add8-4082-8fa8-041ac3ee8860/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 26 09:58:46 crc kubenswrapper[4741]: I0226 09:58:46.765151 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_035e58f7-7a11-4584-baee-a4036a07b94b/probe/0.log"
Feb 26 09:58:47 crc kubenswrapper[4741]: I0226 09:58:47.012517 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-st96n_cffc65f6-91a1-45b8-b723-ac972e12e9f9/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 26 09:58:47 crc kubenswrapper[4741]: I0226 09:58:47.241212 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-p5dww_6cf8018e-f0d4-483a-8778-c94aafa4971d/init/0.log"
Feb 26 09:58:47 crc kubenswrapper[4741]: I0226 09:58:47.500529 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-p5dww_6cf8018e-f0d4-483a-8778-c94aafa4971d/init/0.log"
Feb 26 09:58:47 crc kubenswrapper[4741]: I0226 09:58:47.523238 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-p5dww_6cf8018e-f0d4-483a-8778-c94aafa4971d/dnsmasq-dns/0.log"
Feb 26 09:58:47 crc kubenswrapper[4741]: I0226 09:58:47.551093 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-s2bcm_7f3e3b88-eb11-45c6-a975-3f8db2941855/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 26 09:58:47 crc kubenswrapper[4741]: I0226 09:58:47.852818 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e1cfca6c-9dec-48b7-a390-17450189e9bb/glance-log/0.log"
Feb 26 09:58:47 crc kubenswrapper[4741]: I0226 09:58:47.884778 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e1cfca6c-9dec-48b7-a390-17450189e9bb/glance-httpd/0.log"
Feb 26 09:58:48 crc kubenswrapper[4741]: I0226 09:58:48.223956 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e/glance-log/0.log"
Feb 26 09:58:48 crc kubenswrapper[4741]: I0226 09:58:48.652760 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f5f9b8d7-7b3e-4f03-a0fa-757b39a79a0e/glance-httpd/0.log"
Feb 26 09:58:48 crc kubenswrapper[4741]: I0226 09:58:48.815541 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-d4c87cddd-2cr8g_a48be3c0-df67-4f76-af9e-d9679ae9da07/heat-api/0.log"
Feb 26 09:58:49 crc kubenswrapper[4741]: I0226 09:58:49.045433 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6859949d69-rth8q_4cac2933-9bb3-4dc7-9c8d-8e738d59c6a9/heat-engine/0.log"
Feb 26 09:58:49 crc kubenswrapper[4741]: I0226 09:58:49.209945 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-nwlq9_34e96b16-fd87-4660-bbdb-8e62046ab2ce/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 26 09:58:49 crc kubenswrapper[4741]: I0226 09:58:49.217581 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-754cb96d-pnhrs_801de166-56f0-4a77-b57b-0be437d80ead/heat-cfnapi/0.log"
Feb 26 09:58:49 crc kubenswrapper[4741]: I0226 09:58:49.372884 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-z69z9_78195bed-8ed3-456b-aad9-27eca93ebb64/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 26 09:58:49 crc kubenswrapper[4741]: I0226 09:58:49.557647 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29534941-wlmm8_78823a9f-87b3-4f75-be1d-943051329769/keystone-cron/0.log"
Feb 26 09:58:49 crc kubenswrapper[4741]: I0226 09:58:49.728861 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8c2cbd6a-faeb-4fca-97e8-8d474ffbbe67/kube-state-metrics/0.log"
Feb 26 09:58:49 crc kubenswrapper[4741]: I0226 09:58:49.973662 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-6p9q9_93812dcb-b40b-467e-8831-83b017ebd77b/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 26 09:58:50 crc kubenswrapper[4741]: I0226 09:58:50.056482 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7cb84cbfff-vmnwr_62a4dea8-4285-4342-9c08-a97916f65b3d/keystone-api/0.log"
Feb 26 09:58:50 crc kubenswrapper[4741]: I0226 09:58:50.287981 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-njkxx_8e4fadac-0554-4e65-a18c-b96b1bf9cb1a/logging-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 26 09:58:50 crc kubenswrapper[4741]: I0226 09:58:50.398329 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_5c1caae2-0233-4346-afc0-4729c5e567b0/mysqld-exporter/0.log"
Feb 26 09:58:50 crc kubenswrapper[4741]: I0226 09:58:50.838694 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-grxzn_34921c87-3a4a-4be3-8a8e-8cae7baf4785/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 26 09:58:50 crc kubenswrapper[4741]: I0226 09:58:50.907090 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-d6b948c9c-pm7qf_ffb31ef3-acf3-4fc6-83a4-2a898da5dffd/neutron-httpd/0.log"
Feb 26 09:58:50 crc kubenswrapper[4741]: I0226 09:58:50.947015 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-d6b948c9c-pm7qf_ffb31ef3-acf3-4fc6-83a4-2a898da5dffd/neutron-api/0.log"
Feb 26 09:58:51 crc kubenswrapper[4741]: I0226 09:58:51.565708 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1d98d521-9e77-4bb8-9b2f-33f59b8a6b5b/nova-cell0-conductor-conductor/0.log"
Feb 26 09:58:51 crc kubenswrapper[4741]: I0226 09:58:51.894580 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b3becc56-1879-4497-8208-fb2c62a6f0e4/nova-api-log/0.log"
Feb 26 09:58:51 crc kubenswrapper[4741]: I0226 09:58:51.962711 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_bb22d1d6-9459-4a95-b70e-38c325c092bd/nova-cell1-conductor-conductor/0.log"
Feb 26 09:58:52 crc kubenswrapper[4741]: I0226 09:58:52.171661 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b3becc56-1879-4497-8208-fb2c62a6f0e4/nova-api-api/0.log"
Feb 26 09:58:52 crc kubenswrapper[4741]: I0226 09:58:52.277392 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_078be97e-5b33-4a37-9c43-ffb13c9144e7/nova-cell1-novncproxy-novncproxy/0.log"
Feb 26 09:58:52 crc kubenswrapper[4741]: I0226 09:58:52.288638 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-ch4rj_ebe89e06-bf26-474e-8caf-f29a10b0fb24/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 26 09:58:52 crc kubenswrapper[4741]: I0226 09:58:52.534987 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_495ff256-6ba5-4e6c-b97c-c3a8c15a595b/nova-metadata-log/0.log"
Feb 26 09:58:52 crc kubenswrapper[4741]: I0226 09:58:52.914469 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2b1496b8-9f14-472d-af02-7357f75ba7cf/mysql-bootstrap/0.log"
Feb 26 09:58:52 crc kubenswrapper[4741]: I0226 09:58:52.962660 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ce926f0a-c4d5-4c36-852e-e8c6bc44394e/nova-scheduler-scheduler/0.log"
Feb 26 09:58:53 crc kubenswrapper[4741]: I0226 09:58:53.077293 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2b1496b8-9f14-472d-af02-7357f75ba7cf/mysql-bootstrap/0.log"
Feb 26 09:58:53 crc kubenswrapper[4741]: I0226 09:58:53.181770 4741
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2b1496b8-9f14-472d-af02-7357f75ba7cf/galera/0.log" Feb 26 09:58:53 crc kubenswrapper[4741]: I0226 09:58:53.367848 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ed8ae863-261b-4cbd-945a-b79c99fa0a9f/mysql-bootstrap/0.log" Feb 26 09:58:53 crc kubenswrapper[4741]: I0226 09:58:53.619172 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ed8ae863-261b-4cbd-945a-b79c99fa0a9f/galera/0.log" Feb 26 09:58:53 crc kubenswrapper[4741]: I0226 09:58:53.675139 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ed8ae863-261b-4cbd-945a-b79c99fa0a9f/mysql-bootstrap/0.log" Feb 26 09:58:53 crc kubenswrapper[4741]: I0226 09:58:53.830844 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9884c9db-d963-4349-8cbb-a4a72a81d8cc/openstackclient/0.log" Feb 26 09:58:53 crc kubenswrapper[4741]: I0226 09:58:53.943424 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-bkphx_64c14e41-587f-4290-9133-6a3f89e43d86/openstack-network-exporter/0.log" Feb 26 09:58:54 crc kubenswrapper[4741]: I0226 09:58:54.190204 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h2ft9_4f9888af-7f4e-4ed5-afb4-b13215010297/ovsdb-server-init/0.log" Feb 26 09:58:54 crc kubenswrapper[4741]: I0226 09:58:54.432915 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h2ft9_4f9888af-7f4e-4ed5-afb4-b13215010297/ovsdb-server-init/0.log" Feb 26 09:58:54 crc kubenswrapper[4741]: I0226 09:58:54.437820 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h2ft9_4f9888af-7f4e-4ed5-afb4-b13215010297/ovs-vswitchd/0.log" Feb 26 09:58:54 crc kubenswrapper[4741]: I0226 09:58:54.472103 4741 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovn-controller-ovs-h2ft9_4f9888af-7f4e-4ed5-afb4-b13215010297/ovsdb-server/0.log" Feb 26 09:58:54 crc kubenswrapper[4741]: I0226 09:58:54.653067 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-stlzj_6a8ae1f8-db05-4bc6-a470-60c58ec57f8c/ovn-controller/0.log" Feb 26 09:58:54 crc kubenswrapper[4741]: I0226 09:58:54.917582 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-nnjmx_d8d4bb96-ef81-4ac1-af2c-e8f63a53b830/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 09:58:54 crc kubenswrapper[4741]: I0226 09:58:54.958472 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_495ff256-6ba5-4e6c-b97c-c3a8c15a595b/nova-metadata-metadata/0.log" Feb 26 09:58:55 crc kubenswrapper[4741]: I0226 09:58:55.052547 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_10d582b9-9e4a-4ce4-8763-addb194c9ced/openstack-network-exporter/0.log" Feb 26 09:58:55 crc kubenswrapper[4741]: I0226 09:58:55.291206 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_10d582b9-9e4a-4ce4-8763-addb194c9ced/ovn-northd/0.log" Feb 26 09:58:55 crc kubenswrapper[4741]: I0226 09:58:55.314839 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_85476d1c-5870-4efd-ae6f-ef9a09d9d888/openstack-network-exporter/0.log" Feb 26 09:58:55 crc kubenswrapper[4741]: I0226 09:58:55.316674 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_85476d1c-5870-4efd-ae6f-ef9a09d9d888/ovsdbserver-nb/0.log" Feb 26 09:58:55 crc kubenswrapper[4741]: I0226 09:58:55.505293 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3d649a1f-19db-4b0d-8162-aec7e405ccb4/openstack-network-exporter/0.log" Feb 26 09:58:55 crc kubenswrapper[4741]: I0226 09:58:55.564281 4741 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3d649a1f-19db-4b0d-8162-aec7e405ccb4/ovsdbserver-sb/0.log" Feb 26 09:58:56 crc kubenswrapper[4741]: I0226 09:58:56.000764 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7566dbcd8b-6qsnk_f5c8055f-a0fc-411f-9379-7079ee6d51b4/placement-api/0.log" Feb 26 09:58:56 crc kubenswrapper[4741]: I0226 09:58:56.019866 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7566dbcd8b-6qsnk_f5c8055f-a0fc-411f-9379-7079ee6d51b4/placement-log/0.log" Feb 26 09:58:56 crc kubenswrapper[4741]: I0226 09:58:56.058121 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_24120f9b-9d9b-4783-9dd9-2450215d3d26/init-config-reloader/0.log" Feb 26 09:58:56 crc kubenswrapper[4741]: I0226 09:58:56.273569 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_24120f9b-9d9b-4783-9dd9-2450215d3d26/init-config-reloader/0.log" Feb 26 09:58:56 crc kubenswrapper[4741]: I0226 09:58:56.308208 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_24120f9b-9d9b-4783-9dd9-2450215d3d26/thanos-sidecar/0.log" Feb 26 09:58:56 crc kubenswrapper[4741]: I0226 09:58:56.323071 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_24120f9b-9d9b-4783-9dd9-2450215d3d26/config-reloader/0.log" Feb 26 09:58:56 crc kubenswrapper[4741]: I0226 09:58:56.383720 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_24120f9b-9d9b-4783-9dd9-2450215d3d26/prometheus/0.log" Feb 26 09:58:56 crc kubenswrapper[4741]: I0226 09:58:56.546730 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_acd31381-59b4-426e-94f1-57ac13548b26/setup-container/0.log" Feb 26 09:58:56 crc kubenswrapper[4741]: I0226 09:58:56.767051 4741 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_acd31381-59b4-426e-94f1-57ac13548b26/setup-container/0.log" Feb 26 09:58:56 crc kubenswrapper[4741]: I0226 09:58:56.913838 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_acd31381-59b4-426e-94f1-57ac13548b26/rabbitmq/0.log" Feb 26 09:58:56 crc kubenswrapper[4741]: I0226 09:58:56.966318 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fa1ea6e3-fc0a-4e77-b384-1e8629a4707f/setup-container/0.log" Feb 26 09:58:57 crc kubenswrapper[4741]: I0226 09:58:57.166590 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_9f6d22be-0e7b-46b9-beff-4dacd2f8ee69/setup-container/0.log" Feb 26 09:58:57 crc kubenswrapper[4741]: I0226 09:58:57.274727 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fa1ea6e3-fc0a-4e77-b384-1e8629a4707f/setup-container/0.log" Feb 26 09:58:57 crc kubenswrapper[4741]: I0226 09:58:57.331579 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fa1ea6e3-fc0a-4e77-b384-1e8629a4707f/rabbitmq/0.log" Feb 26 09:58:57 crc kubenswrapper[4741]: I0226 09:58:57.500518 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_9f6d22be-0e7b-46b9-beff-4dacd2f8ee69/setup-container/0.log" Feb 26 09:58:57 crc kubenswrapper[4741]: I0226 09:58:57.521649 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_9f6d22be-0e7b-46b9-beff-4dacd2f8ee69/rabbitmq/0.log" Feb 26 09:58:57 crc kubenswrapper[4741]: I0226 09:58:57.635769 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_9f58f56d-176d-4468-ae5a-31e1e7fb48a1/setup-container/0.log" Feb 26 09:58:57 crc kubenswrapper[4741]: I0226 09:58:57.893533 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-2_9f58f56d-176d-4468-ae5a-31e1e7fb48a1/rabbitmq/0.log" Feb 26 09:58:57 crc kubenswrapper[4741]: I0226 09:58:57.903859 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_9f58f56d-176d-4468-ae5a-31e1e7fb48a1/setup-container/0.log" Feb 26 09:58:57 crc kubenswrapper[4741]: I0226 09:58:57.942156 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-t6pqx_815f01fb-9d09-4745-836f-e4fd93594bb3/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 09:58:58 crc kubenswrapper[4741]: I0226 09:58:58.165536 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-jl69s_c4e66e5b-9130-43ea-b7e9-cd8994a6f3b9/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 09:58:58 crc kubenswrapper[4741]: I0226 09:58:58.320983 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rxfcz_e73f4159-15a0-40ca-b09a-903cb04c34d9/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 09:58:58 crc kubenswrapper[4741]: I0226 09:58:58.466139 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-drt28_bf9f78a5-a58a-4402-a137-7b3ee3bf5d1a/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 09:58:58 crc kubenswrapper[4741]: I0226 09:58:58.974556 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-srzsc_3830734c-a696-440b-80fc-b2b3e1d29cf4/ssh-known-hosts-edpm-deployment/0.log" Feb 26 09:58:59 crc kubenswrapper[4741]: I0226 09:58:59.241388 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5684558457-bgfq2_36ad4e3b-dd1d-40e9-9051-203753e6be0b/proxy-server/0.log" Feb 26 09:58:59 crc kubenswrapper[4741]: I0226 09:58:59.266612 4741 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-proxy-5684558457-bgfq2_36ad4e3b-dd1d-40e9-9051-203753e6be0b/proxy-httpd/0.log" Feb 26 09:58:59 crc kubenswrapper[4741]: I0226 09:58:59.376810 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-xwx84_548f1177-df4c-4b50-920f-f5b9ff95c283/swift-ring-rebalance/0.log" Feb 26 09:58:59 crc kubenswrapper[4741]: I0226 09:58:59.526755 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/account-reaper/0.log" Feb 26 09:58:59 crc kubenswrapper[4741]: I0226 09:58:59.527293 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/account-auditor/0.log" Feb 26 09:58:59 crc kubenswrapper[4741]: I0226 09:58:59.687481 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/account-replicator/0.log" Feb 26 09:58:59 crc kubenswrapper[4741]: I0226 09:58:59.790083 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/container-auditor/0.log" Feb 26 09:58:59 crc kubenswrapper[4741]: I0226 09:58:59.879780 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/account-server/0.log" Feb 26 09:58:59 crc kubenswrapper[4741]: I0226 09:58:59.929536 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/container-replicator/0.log" Feb 26 09:59:00 crc kubenswrapper[4741]: I0226 09:59:00.023203 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/container-server/0.log" Feb 26 09:59:00 crc kubenswrapper[4741]: I0226 09:59:00.050151 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/container-updater/0.log" Feb 26 09:59:00 crc kubenswrapper[4741]: I0226 09:59:00.198998 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/object-expirer/0.log" Feb 26 09:59:00 crc kubenswrapper[4741]: I0226 09:59:00.223249 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/object-auditor/0.log" Feb 26 09:59:00 crc kubenswrapper[4741]: I0226 09:59:00.261877 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/object-server/0.log" Feb 26 09:59:00 crc kubenswrapper[4741]: I0226 09:59:00.325961 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/object-replicator/0.log" Feb 26 09:59:00 crc kubenswrapper[4741]: I0226 09:59:00.438838 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/object-updater/0.log" Feb 26 09:59:00 crc kubenswrapper[4741]: I0226 09:59:00.488592 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/rsync/0.log" Feb 26 09:59:00 crc kubenswrapper[4741]: I0226 09:59:00.590683 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91b0231b-fbdf-4714-ac14-d3621c8c7807/swift-recon-cron/0.log" Feb 26 09:59:00 crc kubenswrapper[4741]: I0226 09:59:00.756535 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-fhgs7_a3cde25c-5220-45d5-8f47-db09f2db34e8/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 09:59:00 crc kubenswrapper[4741]: I0226 09:59:00.890041 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-k59jv_7b5e677e-1d6b-4c7f-925e-ac5f65ced91a/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 09:59:01 crc kubenswrapper[4741]: I0226 09:59:01.112750 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7e72cb7c-ec1b-463c-96ee-7d6e64f805b1/test-operator-logs-container/0.log" Feb 26 09:59:01 crc kubenswrapper[4741]: I0226 09:59:01.336231 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-m9jj7_1d5750c0-9314-4e3c-9711-c4d11fba6b84/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 09:59:01 crc kubenswrapper[4741]: I0226 09:59:01.919202 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_5b00e4ed-4b6a-4871-b454-dec4229deb64/tempest-tests-tempest-tests-runner/0.log" Feb 26 09:59:09 crc kubenswrapper[4741]: I0226 09:59:09.080030 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d7ee427f-ada6-4496-a314-c5cd63abefcd/memcached/0.log" Feb 26 09:59:33 crc kubenswrapper[4741]: I0226 09:59:33.242850 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f_9e45a379-1f39-4497-a1c9-cde834f3dfcc/util/0.log" Feb 26 09:59:33 crc kubenswrapper[4741]: I0226 09:59:33.508101 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f_9e45a379-1f39-4497-a1c9-cde834f3dfcc/util/0.log" Feb 26 09:59:33 crc kubenswrapper[4741]: I0226 09:59:33.534691 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f_9e45a379-1f39-4497-a1c9-cde834f3dfcc/pull/0.log" Feb 26 09:59:33 crc 
kubenswrapper[4741]: I0226 09:59:33.559192 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f_9e45a379-1f39-4497-a1c9-cde834f3dfcc/pull/0.log" Feb 26 09:59:33 crc kubenswrapper[4741]: I0226 09:59:33.838839 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f_9e45a379-1f39-4497-a1c9-cde834f3dfcc/extract/0.log" Feb 26 09:59:33 crc kubenswrapper[4741]: I0226 09:59:33.876612 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f_9e45a379-1f39-4497-a1c9-cde834f3dfcc/util/0.log" Feb 26 09:59:33 crc kubenswrapper[4741]: I0226 09:59:33.887597 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b48c24943f9665c39df803c6f2e2891bf58f37f7200073a14493cef0e9ppx5f_9e45a379-1f39-4497-a1c9-cde834f3dfcc/pull/0.log" Feb 26 09:59:35 crc kubenswrapper[4741]: I0226 09:59:34.463928 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-6bfw4_7d9bffe2-0600-47fe-83e6-847d6943a748/manager/0.log" Feb 26 09:59:35 crc kubenswrapper[4741]: I0226 09:59:35.163137 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-wdfht_6e5158cf-c5d8-46e4-b433-20c6a410bf5e/manager/0.log" Feb 26 09:59:35 crc kubenswrapper[4741]: I0226 09:59:35.745253 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-9b4f4_e3fc347b-349b-4811-8f1e-0281658e669a/manager/0.log" Feb 26 09:59:35 crc kubenswrapper[4741]: I0226 09:59:35.757373 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-mkmsh_aafef34e-4723-41d4-a28e-634f4ba80bea/manager/0.log" Feb 26 09:59:36 crc kubenswrapper[4741]: I0226 09:59:36.726599 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" podUID="3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:59:36 crc kubenswrapper[4741]: I0226 09:59:36.727647 4741 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qmzqh" podUID="3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:59:36 crc kubenswrapper[4741]: I0226 09:59:36.899054 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-2lglc_8520f5ec-d0e0-4bc0-a10b-dfb5157c5924/manager/0.log" Feb 26 09:59:37 crc kubenswrapper[4741]: I0226 09:59:37.240603 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-5tj5s_76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe/manager/1.log" Feb 26 09:59:38 crc kubenswrapper[4741]: I0226 09:59:38.247637 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-5tj5s_76f0ad4f-b824-4eb8-8b1e-ea9501c00fbe/manager/0.log" Feb 26 09:59:38 crc kubenswrapper[4741]: I0226 09:59:38.691242 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-s78b5_b2c3a19d-a170-476f-a589-e7cde492ac1d/manager/0.log" Feb 26 09:59:38 crc 
kubenswrapper[4741]: I0226 09:59:38.984794 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-d672t_e97b1690-b880-4c0d-9e36-484d2abf0e8e/manager/0.log" Feb 26 09:59:39 crc kubenswrapper[4741]: I0226 09:59:39.037967 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-b4tjj_ee5a7051-54e5-4fd9-97b1-1cdcf2ed4fa6/manager/0.log" Feb 26 09:59:39 crc kubenswrapper[4741]: I0226 09:59:39.372032 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-d7flk_0d69cf5a-6ccc-4c66-a767-fd837ea440a3/manager/0.log" Feb 26 09:59:39 crc kubenswrapper[4741]: I0226 09:59:39.746454 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-tc4z9_3ac2b7cc-5f85-4ba3-8ccb-cca2152ffffb/manager/0.log" Feb 26 09:59:39 crc kubenswrapper[4741]: I0226 09:59:39.852933 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-k2c7v_c40047b0-d115-4a5f-aa50-d888eafff094/manager/0.log" Feb 26 09:59:40 crc kubenswrapper[4741]: I0226 09:59:40.051243 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-z8h9r_6980cc82-375e-4057-8dd6-1518d19891ed/manager/1.log" Feb 26 09:59:40 crc kubenswrapper[4741]: I0226 09:59:40.070268 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-z8h9r_6980cc82-375e-4057-8dd6-1518d19891ed/manager/0.log" Feb 26 09:59:40 crc kubenswrapper[4741]: I0226 09:59:40.260391 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z_80b43fed-c72c-4b2b-8d4d-0a0b9044d61f/manager/1.log" Feb 26 09:59:40 crc kubenswrapper[4741]: I0226 09:59:40.409102 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cbrb2z_80b43fed-c72c-4b2b-8d4d-0a0b9044d61f/manager/0.log" Feb 26 09:59:40 crc kubenswrapper[4741]: I0226 09:59:40.941352 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-76fc895699-z8llq_4b189628-5343-4512-bf5d-1daf4abf4079/operator/0.log" Feb 26 09:59:40 crc kubenswrapper[4741]: I0226 09:59:40.977232 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-thrfc_2fcbc58b-4880-4c34-8d80-16c8be56db58/registry-server/0.log" Feb 26 09:59:41 crc kubenswrapper[4741]: I0226 09:59:41.386967 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-zhvxr_10293970-cf7e-4d61-9522-0bbfaa7a872f/manager/0.log" Feb 26 09:59:41 crc kubenswrapper[4741]: I0226 09:59:41.524547 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-x77f8_c9c57ac4-4382-4a2a-b0c7-8985f71ea615/manager/0.log" Feb 26 09:59:41 crc kubenswrapper[4741]: I0226 09:59:41.609807 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9rlrx_6c09faf7-6a12-4474-8251-2aa222e9c596/operator/1.log" Feb 26 09:59:41 crc kubenswrapper[4741]: I0226 09:59:41.827293 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9rlrx_6c09faf7-6a12-4474-8251-2aa222e9c596/operator/0.log" Feb 26 09:59:41 crc kubenswrapper[4741]: I0226 09:59:41.980425 4741 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-7c7nz_dbdb4143-6ca6-4468-ae59-db0a15ae9229/manager/0.log" Feb 26 09:59:42 crc kubenswrapper[4741]: I0226 09:59:42.369936 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-zf778_e569c05c-2b4a-448e-8393-65650cdc0d4a/manager/1.log" Feb 26 09:59:42 crc kubenswrapper[4741]: I0226 09:59:42.565892 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-zf778_e569c05c-2b4a-448e-8393-65650cdc0d4a/manager/0.log" Feb 26 09:59:42 crc kubenswrapper[4741]: I0226 09:59:42.954090 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-qmzqh_3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed/manager/1.log" Feb 26 09:59:42 crc kubenswrapper[4741]: I0226 09:59:42.984888 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5854c6b474-xr2dz_001f4723-6a83-41ae-ac81-fc17c370a90e/manager/0.log" Feb 26 09:59:42 crc kubenswrapper[4741]: I0226 09:59:42.990229 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-qmzqh_3bc858b8-4024-4ea9-8a4f-8a24a90dd5ed/manager/0.log" Feb 26 09:59:43 crc kubenswrapper[4741]: I0226 09:59:43.598374 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-79d8d89fdf-5jkv5_e374c69c-1959-44c3-839c-2b5897259440/manager/0.log" Feb 26 09:59:49 crc kubenswrapper[4741]: I0226 09:59:49.252556 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-rt588_f4754cdd-d402-4c7e-a0cf-a39549369eb8/manager/0.log" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.721968 4741 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ljs62"] Feb 26 09:59:55 crc kubenswrapper[4741]: E0226 09:59:55.723323 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd2ff0d-8846-4fee-8d1b-547f7bf33410" containerName="container-00" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.723345 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd2ff0d-8846-4fee-8d1b-547f7bf33410" containerName="container-00" Feb 26 09:59:55 crc kubenswrapper[4741]: E0226 09:59:55.723447 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f86dcc-1861-414d-8554-f65b604a21b0" containerName="oc" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.723459 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f86dcc-1861-414d-8554-f65b604a21b0" containerName="oc" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.723913 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcd2ff0d-8846-4fee-8d1b-547f7bf33410" containerName="container-00" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.723942 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f86dcc-1861-414d-8554-f65b604a21b0" containerName="oc" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.747482 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ljs62" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.839363 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ljs62"] Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.850965 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8876-2789-4cab-b3d0-a9f684e04573-utilities\") pod \"certified-operators-ljs62\" (UID: \"d5ed8876-2789-4cab-b3d0-a9f684e04573\") " pod="openshift-marketplace/certified-operators-ljs62" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.851101 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52wrd\" (UniqueName: \"kubernetes.io/projected/d5ed8876-2789-4cab-b3d0-a9f684e04573-kube-api-access-52wrd\") pod \"certified-operators-ljs62\" (UID: \"d5ed8876-2789-4cab-b3d0-a9f684e04573\") " pod="openshift-marketplace/certified-operators-ljs62" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.851596 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8876-2789-4cab-b3d0-a9f684e04573-catalog-content\") pod \"certified-operators-ljs62\" (UID: \"d5ed8876-2789-4cab-b3d0-a9f684e04573\") " pod="openshift-marketplace/certified-operators-ljs62" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.954214 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52wrd\" (UniqueName: \"kubernetes.io/projected/d5ed8876-2789-4cab-b3d0-a9f684e04573-kube-api-access-52wrd\") pod \"certified-operators-ljs62\" (UID: \"d5ed8876-2789-4cab-b3d0-a9f684e04573\") " pod="openshift-marketplace/certified-operators-ljs62" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.954362 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8876-2789-4cab-b3d0-a9f684e04573-catalog-content\") pod \"certified-operators-ljs62\" (UID: \"d5ed8876-2789-4cab-b3d0-a9f684e04573\") " pod="openshift-marketplace/certified-operators-ljs62" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.954817 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8876-2789-4cab-b3d0-a9f684e04573-catalog-content\") pod \"certified-operators-ljs62\" (UID: \"d5ed8876-2789-4cab-b3d0-a9f684e04573\") " pod="openshift-marketplace/certified-operators-ljs62" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.954939 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8876-2789-4cab-b3d0-a9f684e04573-utilities\") pod \"certified-operators-ljs62\" (UID: \"d5ed8876-2789-4cab-b3d0-a9f684e04573\") " pod="openshift-marketplace/certified-operators-ljs62" Feb 26 09:59:55 crc kubenswrapper[4741]: I0226 09:59:55.955227 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8876-2789-4cab-b3d0-a9f684e04573-utilities\") pod \"certified-operators-ljs62\" (UID: \"d5ed8876-2789-4cab-b3d0-a9f684e04573\") " pod="openshift-marketplace/certified-operators-ljs62" Feb 26 09:59:56 crc kubenswrapper[4741]: I0226 09:59:56.027378 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52wrd\" (UniqueName: \"kubernetes.io/projected/d5ed8876-2789-4cab-b3d0-a9f684e04573-kube-api-access-52wrd\") pod \"certified-operators-ljs62\" (UID: \"d5ed8876-2789-4cab-b3d0-a9f684e04573\") " pod="openshift-marketplace/certified-operators-ljs62" Feb 26 09:59:56 crc kubenswrapper[4741]: I0226 09:59:56.083400 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ljs62" Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.226722 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ljs62"] Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.369313 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mdlqk"] Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.377838 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.388343 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdlqk"] Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.392458 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljs62" event={"ID":"d5ed8876-2789-4cab-b3d0-a9f684e04573","Type":"ContainerStarted","Data":"740251907aa5a473330768888bdd6e9b01fd67f649d0ede719cb56630a562fae"} Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.404014 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55725aa8-b544-4c46-863a-87d0a10990a4-catalog-content\") pod \"redhat-marketplace-mdlqk\" (UID: \"55725aa8-b544-4c46-863a-87d0a10990a4\") " pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.404178 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gs2\" (UniqueName: \"kubernetes.io/projected/55725aa8-b544-4c46-863a-87d0a10990a4-kube-api-access-v6gs2\") pod \"redhat-marketplace-mdlqk\" (UID: \"55725aa8-b544-4c46-863a-87d0a10990a4\") " pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 
09:59:57.404225 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55725aa8-b544-4c46-863a-87d0a10990a4-utilities\") pod \"redhat-marketplace-mdlqk\" (UID: \"55725aa8-b544-4c46-863a-87d0a10990a4\") " pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.506649 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6gs2\" (UniqueName: \"kubernetes.io/projected/55725aa8-b544-4c46-863a-87d0a10990a4-kube-api-access-v6gs2\") pod \"redhat-marketplace-mdlqk\" (UID: \"55725aa8-b544-4c46-863a-87d0a10990a4\") " pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.507057 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55725aa8-b544-4c46-863a-87d0a10990a4-utilities\") pod \"redhat-marketplace-mdlqk\" (UID: \"55725aa8-b544-4c46-863a-87d0a10990a4\") " pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.507279 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55725aa8-b544-4c46-863a-87d0a10990a4-catalog-content\") pod \"redhat-marketplace-mdlqk\" (UID: \"55725aa8-b544-4c46-863a-87d0a10990a4\") " pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.508042 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55725aa8-b544-4c46-863a-87d0a10990a4-catalog-content\") pod \"redhat-marketplace-mdlqk\" (UID: \"55725aa8-b544-4c46-863a-87d0a10990a4\") " pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.510030 4741 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55725aa8-b544-4c46-863a-87d0a10990a4-utilities\") pod \"redhat-marketplace-mdlqk\" (UID: \"55725aa8-b544-4c46-863a-87d0a10990a4\") " pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.536512 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6gs2\" (UniqueName: \"kubernetes.io/projected/55725aa8-b544-4c46-863a-87d0a10990a4-kube-api-access-v6gs2\") pod \"redhat-marketplace-mdlqk\" (UID: \"55725aa8-b544-4c46-863a-87d0a10990a4\") " pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 09:59:57 crc kubenswrapper[4741]: I0226 09:59:57.732181 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 09:59:58 crc kubenswrapper[4741]: I0226 09:59:58.416850 4741 generic.go:334] "Generic (PLEG): container finished" podID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerID="1f1cb2fd806d40a01fbb8fd6a0a85235449f21d99ca54049f7d8ec68055a81a8" exitCode=0 Feb 26 09:59:58 crc kubenswrapper[4741]: I0226 09:59:58.417189 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljs62" event={"ID":"d5ed8876-2789-4cab-b3d0-a9f684e04573","Type":"ContainerDied","Data":"1f1cb2fd806d40a01fbb8fd6a0a85235449f21d99ca54049f7d8ec68055a81a8"} Feb 26 09:59:58 crc kubenswrapper[4741]: I0226 09:59:58.490443 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdlqk"] Feb 26 09:59:59 crc kubenswrapper[4741]: I0226 09:59:59.482208 4741 generic.go:334] "Generic (PLEG): container finished" podID="55725aa8-b544-4c46-863a-87d0a10990a4" containerID="c67646fe61a39bf4d7c439d1b78e4d05498f90480c8f80f448967061e6eda35d" exitCode=0 Feb 26 09:59:59 crc kubenswrapper[4741]: I0226 09:59:59.482296 4741 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdlqk" event={"ID":"55725aa8-b544-4c46-863a-87d0a10990a4","Type":"ContainerDied","Data":"c67646fe61a39bf4d7c439d1b78e4d05498f90480c8f80f448967061e6eda35d"} Feb 26 09:59:59 crc kubenswrapper[4741]: I0226 09:59:59.482743 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdlqk" event={"ID":"55725aa8-b544-4c46-863a-87d0a10990a4","Type":"ContainerStarted","Data":"7e08ed082474b2bb5831958bb3b4c77e0749a8f5a9d855025fb2597693a51f74"} Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.162547 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535000-47jz6"] Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.165069 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535000-47jz6" Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.167562 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.167592 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.167850 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.300772 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbjkt\" (UniqueName: \"kubernetes.io/projected/09171698-0f90-4ea8-ae5f-68ae73081d30-kube-api-access-hbjkt\") pod \"auto-csr-approver-29535000-47jz6\" (UID: \"09171698-0f90-4ea8-ae5f-68ae73081d30\") " pod="openshift-infra/auto-csr-approver-29535000-47jz6" Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.404360 4741 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hbjkt\" (UniqueName: \"kubernetes.io/projected/09171698-0f90-4ea8-ae5f-68ae73081d30-kube-api-access-hbjkt\") pod \"auto-csr-approver-29535000-47jz6\" (UID: \"09171698-0f90-4ea8-ae5f-68ae73081d30\") " pod="openshift-infra/auto-csr-approver-29535000-47jz6" Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.427089 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbjkt\" (UniqueName: \"kubernetes.io/projected/09171698-0f90-4ea8-ae5f-68ae73081d30-kube-api-access-hbjkt\") pod \"auto-csr-approver-29535000-47jz6\" (UID: \"09171698-0f90-4ea8-ae5f-68ae73081d30\") " pod="openshift-infra/auto-csr-approver-29535000-47jz6" Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.486538 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535000-47jz6" Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.773932 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljs62" event={"ID":"d5ed8876-2789-4cab-b3d0-a9f684e04573","Type":"ContainerStarted","Data":"f6d0339f650ad45115834a34a998e9eb732a44843985a438e74146256bf929af"} Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.810531 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535000-47jz6"] Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.830209 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn"] Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.832522 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.836545 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.836958 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 10:00:00 crc kubenswrapper[4741]: I0226 10:00:00.868624 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn"] Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.028999 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-config-volume\") pod \"collect-profiles-29535000-jg9mn\" (UID: \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.029061 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-secret-volume\") pod \"collect-profiles-29535000-jg9mn\" (UID: \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.029514 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldbvf\" (UniqueName: \"kubernetes.io/projected/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-kube-api-access-ldbvf\") pod \"collect-profiles-29535000-jg9mn\" (UID: \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.133970 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldbvf\" (UniqueName: \"kubernetes.io/projected/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-kube-api-access-ldbvf\") pod \"collect-profiles-29535000-jg9mn\" (UID: \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.134754 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-config-volume\") pod \"collect-profiles-29535000-jg9mn\" (UID: \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.134801 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-secret-volume\") pod \"collect-profiles-29535000-jg9mn\" (UID: \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.136488 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-config-volume\") pod \"collect-profiles-29535000-jg9mn\" (UID: \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.141457 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-secret-volume\") pod \"collect-profiles-29535000-jg9mn\" (UID: \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.166752 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldbvf\" (UniqueName: \"kubernetes.io/projected/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-kube-api-access-ldbvf\") pod \"collect-profiles-29535000-jg9mn\" (UID: \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.288043 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.511972 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535000-47jz6"] Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.823397 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdlqk" event={"ID":"55725aa8-b544-4c46-863a-87d0a10990a4","Type":"ContainerStarted","Data":"e340dc39e53e161423048a9951433c9b3843f69da0a0f233d66effce4d5b4638"} Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.831148 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535000-47jz6" event={"ID":"09171698-0f90-4ea8-ae5f-68ae73081d30","Type":"ContainerStarted","Data":"0237b47e819d1b03f136cf1fe2669bbbf47146dbd7642ca65a7c11ca60a7e040"} Feb 26 10:00:01 crc kubenswrapper[4741]: W0226 10:00:01.991221 4741 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbc2487b_3f1b_4afe_8535_d8ef75d4cc2f.slice/crio-e884ebb556206fbc112ae7ef4bb4f55dc86ab1a0f9228063c608f42a4ca136b6 WatchSource:0}: Error finding container e884ebb556206fbc112ae7ef4bb4f55dc86ab1a0f9228063c608f42a4ca136b6: Status 404 returned error can't find the container with id e884ebb556206fbc112ae7ef4bb4f55dc86ab1a0f9228063c608f42a4ca136b6 Feb 26 10:00:01 crc kubenswrapper[4741]: I0226 10:00:01.992195 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn"] Feb 26 10:00:02 crc kubenswrapper[4741]: I0226 10:00:02.874539 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" event={"ID":"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f","Type":"ContainerStarted","Data":"f42c15c067f388de0a18fcb388d79aa0e96d4d00693b72b8584f4bce704fe058"} Feb 26 10:00:02 crc kubenswrapper[4741]: I0226 10:00:02.874911 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" event={"ID":"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f","Type":"ContainerStarted","Data":"e884ebb556206fbc112ae7ef4bb4f55dc86ab1a0f9228063c608f42a4ca136b6"} Feb 26 10:00:02 crc kubenswrapper[4741]: I0226 10:00:02.902349 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" podStartSLOduration=2.902323934 podStartE2EDuration="2.902323934s" podCreationTimestamp="2026-02-26 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 10:00:02.890645992 +0000 UTC m=+6437.886583399" watchObservedRunningTime="2026-02-26 10:00:02.902323934 +0000 UTC m=+6437.898261321" Feb 26 10:00:03 crc kubenswrapper[4741]: I0226 10:00:03.887584 4741 generic.go:334] "Generic (PLEG): 
container finished" podID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerID="f6d0339f650ad45115834a34a998e9eb732a44843985a438e74146256bf929af" exitCode=0 Feb 26 10:00:03 crc kubenswrapper[4741]: I0226 10:00:03.887921 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljs62" event={"ID":"d5ed8876-2789-4cab-b3d0-a9f684e04573","Type":"ContainerDied","Data":"f6d0339f650ad45115834a34a998e9eb732a44843985a438e74146256bf929af"} Feb 26 10:00:03 crc kubenswrapper[4741]: I0226 10:00:03.891921 4741 generic.go:334] "Generic (PLEG): container finished" podID="fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f" containerID="f42c15c067f388de0a18fcb388d79aa0e96d4d00693b72b8584f4bce704fe058" exitCode=0 Feb 26 10:00:03 crc kubenswrapper[4741]: I0226 10:00:03.892132 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" event={"ID":"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f","Type":"ContainerDied","Data":"f42c15c067f388de0a18fcb388d79aa0e96d4d00693b72b8584f4bce704fe058"} Feb 26 10:00:03 crc kubenswrapper[4741]: I0226 10:00:03.898566 4741 generic.go:334] "Generic (PLEG): container finished" podID="55725aa8-b544-4c46-863a-87d0a10990a4" containerID="e340dc39e53e161423048a9951433c9b3843f69da0a0f233d66effce4d5b4638" exitCode=0 Feb 26 10:00:03 crc kubenswrapper[4741]: I0226 10:00:03.898668 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdlqk" event={"ID":"55725aa8-b544-4c46-863a-87d0a10990a4","Type":"ContainerDied","Data":"e340dc39e53e161423048a9951433c9b3843f69da0a0f233d66effce4d5b4638"} Feb 26 10:00:05 crc kubenswrapper[4741]: I0226 10:00:05.929875 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" event={"ID":"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f","Type":"ContainerDied","Data":"e884ebb556206fbc112ae7ef4bb4f55dc86ab1a0f9228063c608f42a4ca136b6"} Feb 26 
10:00:05 crc kubenswrapper[4741]: I0226 10:00:05.930600 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e884ebb556206fbc112ae7ef4bb4f55dc86ab1a0f9228063c608f42a4ca136b6" Feb 26 10:00:05 crc kubenswrapper[4741]: I0226 10:00:05.934076 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdlqk" event={"ID":"55725aa8-b544-4c46-863a-87d0a10990a4","Type":"ContainerStarted","Data":"49635b5db4076b897df618d1924c02fbc2e0ec60d2b24f2ee9049982b579712c"} Feb 26 10:00:05 crc kubenswrapper[4741]: I0226 10:00:05.980079 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mdlqk" podStartSLOduration=3.585960183 podStartE2EDuration="8.980049203s" podCreationTimestamp="2026-02-26 09:59:57 +0000 UTC" firstStartedPulling="2026-02-26 09:59:59.485551031 +0000 UTC m=+6434.481488418" lastFinishedPulling="2026-02-26 10:00:04.879640051 +0000 UTC m=+6439.875577438" observedRunningTime="2026-02-26 10:00:05.970866422 +0000 UTC m=+6440.966803809" watchObservedRunningTime="2026-02-26 10:00:05.980049203 +0000 UTC m=+6440.975986590" Feb 26 10:00:06 crc kubenswrapper[4741]: I0226 10:00:06.048412 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" Feb 26 10:00:06 crc kubenswrapper[4741]: I0226 10:00:06.232650 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-secret-volume\") pod \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\" (UID: \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\") " Feb 26 10:00:06 crc kubenswrapper[4741]: I0226 10:00:06.233050 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldbvf\" (UniqueName: \"kubernetes.io/projected/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-kube-api-access-ldbvf\") pod \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\" (UID: \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\") " Feb 26 10:00:06 crc kubenswrapper[4741]: I0226 10:00:06.233371 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-config-volume\") pod \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\" (UID: \"fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f\") " Feb 26 10:00:06 crc kubenswrapper[4741]: I0226 10:00:06.235161 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-config-volume" (OuterVolumeSpecName: "config-volume") pod "fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f" (UID: "fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 10:00:06 crc kubenswrapper[4741]: I0226 10:00:06.247615 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-kube-api-access-ldbvf" (OuterVolumeSpecName: "kube-api-access-ldbvf") pod "fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f" (UID: "fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f"). 
InnerVolumeSpecName "kube-api-access-ldbvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 10:00:06 crc kubenswrapper[4741]: I0226 10:00:06.252698 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f" (UID: "fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 10:00:06 crc kubenswrapper[4741]: I0226 10:00:06.337081 4741 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 10:00:06 crc kubenswrapper[4741]: I0226 10:00:06.337326 4741 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 10:00:06 crc kubenswrapper[4741]: I0226 10:00:06.337382 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldbvf\" (UniqueName: \"kubernetes.io/projected/fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f-kube-api-access-ldbvf\") on node \"crc\" DevicePath \"\"" Feb 26 10:00:06 crc kubenswrapper[4741]: I0226 10:00:06.949993 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljs62" event={"ID":"d5ed8876-2789-4cab-b3d0-a9f684e04573","Type":"ContainerStarted","Data":"98ffe0da02b85f19cdc88955cd6c8a8455e20879ae70792bbca4dab3f8b7fddb"} Feb 26 10:00:06 crc kubenswrapper[4741]: I0226 10:00:06.950025 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535000-jg9mn" Feb 26 10:00:07 crc kubenswrapper[4741]: I0226 10:00:07.008820 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ljs62" podStartSLOduration=4.796704862 podStartE2EDuration="12.008788297s" podCreationTimestamp="2026-02-26 09:59:55 +0000 UTC" firstStartedPulling="2026-02-26 09:59:58.423045566 +0000 UTC m=+6433.418982953" lastFinishedPulling="2026-02-26 10:00:05.635128991 +0000 UTC m=+6440.631066388" observedRunningTime="2026-02-26 10:00:06.994770108 +0000 UTC m=+6441.990707515" watchObservedRunningTime="2026-02-26 10:00:07.008788297 +0000 UTC m=+6442.004725684" Feb 26 10:00:07 crc kubenswrapper[4741]: I0226 10:00:07.163812 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g"] Feb 26 10:00:07 crc kubenswrapper[4741]: I0226 10:00:07.184267 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534955-pqn9g"] Feb 26 10:00:07 crc kubenswrapper[4741]: I0226 10:00:07.736462 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 10:00:07 crc kubenswrapper[4741]: I0226 10:00:07.736527 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 10:00:07 crc kubenswrapper[4741]: I0226 10:00:07.803136 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c025a68-cfff-44a8-a399-e667f3e9dd80" path="/var/lib/kubelet/pods/0c025a68-cfff-44a8-a399-e667f3e9dd80/volumes" Feb 26 10:00:09 crc kubenswrapper[4741]: I0226 10:00:09.255273 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mdlqk" podUID="55725aa8-b544-4c46-863a-87d0a10990a4" containerName="registry-server" 
probeResult="failure" output=< Feb 26 10:00:09 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 10:00:09 crc kubenswrapper[4741]: > Feb 26 10:00:10 crc kubenswrapper[4741]: I0226 10:00:10.710187 4741 scope.go:117] "RemoveContainer" containerID="d460837293d517675c1be9db8cec3745e9c4fbba89f4642462b923069cc05b82" Feb 26 10:00:11 crc kubenswrapper[4741]: I0226 10:00:11.006009 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535000-47jz6" event={"ID":"09171698-0f90-4ea8-ae5f-68ae73081d30","Type":"ContainerStarted","Data":"e7a1dfd160d6d8234339b75f41d9b75ffff316c9f2832b4c7558526441390acf"} Feb 26 10:00:14 crc kubenswrapper[4741]: I0226 10:00:14.149974 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hgdrd_9eef37bd-7913-4e1c-baf0-775f14f6e18a/control-plane-machine-set-operator/0.log" Feb 26 10:00:14 crc kubenswrapper[4741]: I0226 10:00:14.530991 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-llf79_79be0654-3564-4cd1-87f7-e9eb1c972bbd/kube-rbac-proxy/0.log" Feb 26 10:00:14 crc kubenswrapper[4741]: I0226 10:00:14.592638 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-llf79_79be0654-3564-4cd1-87f7-e9eb1c972bbd/machine-api-operator/0.log" Feb 26 10:00:16 crc kubenswrapper[4741]: I0226 10:00:16.063333 4741 generic.go:334] "Generic (PLEG): container finished" podID="09171698-0f90-4ea8-ae5f-68ae73081d30" containerID="e7a1dfd160d6d8234339b75f41d9b75ffff316c9f2832b4c7558526441390acf" exitCode=0 Feb 26 10:00:16 crc kubenswrapper[4741]: I0226 10:00:16.063445 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535000-47jz6" 
event={"ID":"09171698-0f90-4ea8-ae5f-68ae73081d30","Type":"ContainerDied","Data":"e7a1dfd160d6d8234339b75f41d9b75ffff316c9f2832b4c7558526441390acf"} Feb 26 10:00:16 crc kubenswrapper[4741]: I0226 10:00:16.084375 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ljs62" Feb 26 10:00:16 crc kubenswrapper[4741]: I0226 10:00:16.086246 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ljs62" Feb 26 10:00:17 crc kubenswrapper[4741]: I0226 10:00:17.139831 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ljs62" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="registry-server" probeResult="failure" output=< Feb 26 10:00:17 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 10:00:17 crc kubenswrapper[4741]: > Feb 26 10:00:17 crc kubenswrapper[4741]: I0226 10:00:17.809039 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535000-47jz6" Feb 26 10:00:17 crc kubenswrapper[4741]: I0226 10:00:17.897301 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbjkt\" (UniqueName: \"kubernetes.io/projected/09171698-0f90-4ea8-ae5f-68ae73081d30-kube-api-access-hbjkt\") pod \"09171698-0f90-4ea8-ae5f-68ae73081d30\" (UID: \"09171698-0f90-4ea8-ae5f-68ae73081d30\") " Feb 26 10:00:17 crc kubenswrapper[4741]: I0226 10:00:17.940128 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09171698-0f90-4ea8-ae5f-68ae73081d30-kube-api-access-hbjkt" (OuterVolumeSpecName: "kube-api-access-hbjkt") pod "09171698-0f90-4ea8-ae5f-68ae73081d30" (UID: "09171698-0f90-4ea8-ae5f-68ae73081d30"). InnerVolumeSpecName "kube-api-access-hbjkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 10:00:18 crc kubenswrapper[4741]: I0226 10:00:18.001004 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbjkt\" (UniqueName: \"kubernetes.io/projected/09171698-0f90-4ea8-ae5f-68ae73081d30-kube-api-access-hbjkt\") on node \"crc\" DevicePath \"\"" Feb 26 10:00:18 crc kubenswrapper[4741]: I0226 10:00:18.089052 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535000-47jz6" event={"ID":"09171698-0f90-4ea8-ae5f-68ae73081d30","Type":"ContainerDied","Data":"0237b47e819d1b03f136cf1fe2669bbbf47146dbd7642ca65a7c11ca60a7e040"} Feb 26 10:00:18 crc kubenswrapper[4741]: I0226 10:00:18.089141 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0237b47e819d1b03f136cf1fe2669bbbf47146dbd7642ca65a7c11ca60a7e040" Feb 26 10:00:18 crc kubenswrapper[4741]: I0226 10:00:18.089139 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535000-47jz6" Feb 26 10:00:18 crc kubenswrapper[4741]: I0226 10:00:18.146303 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534994-5cvc9"] Feb 26 10:00:18 crc kubenswrapper[4741]: I0226 10:00:18.161424 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534994-5cvc9"] Feb 26 10:00:18 crc kubenswrapper[4741]: I0226 10:00:18.790974 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mdlqk" podUID="55725aa8-b544-4c46-863a-87d0a10990a4" containerName="registry-server" probeResult="failure" output=< Feb 26 10:00:18 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 10:00:18 crc kubenswrapper[4741]: > Feb 26 10:00:19 crc kubenswrapper[4741]: I0226 10:00:19.818382 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="37a463cc-32ff-4172-90eb-6aba60242097" path="/var/lib/kubelet/pods/37a463cc-32ff-4172-90eb-6aba60242097/volumes" Feb 26 10:00:25 crc kubenswrapper[4741]: I0226 10:00:25.149414 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 10:00:25 crc kubenswrapper[4741]: I0226 10:00:25.150405 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 10:00:27 crc kubenswrapper[4741]: I0226 10:00:27.145410 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ljs62" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="registry-server" probeResult="failure" output=< Feb 26 10:00:27 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 10:00:27 crc kubenswrapper[4741]: > Feb 26 10:00:28 crc kubenswrapper[4741]: I0226 10:00:28.791625 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mdlqk" podUID="55725aa8-b544-4c46-863a-87d0a10990a4" containerName="registry-server" probeResult="failure" output=< Feb 26 10:00:28 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 10:00:28 crc kubenswrapper[4741]: > Feb 26 10:00:31 crc kubenswrapper[4741]: I0226 10:00:31.683788 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-crzq9_87889307-9479-43d6-b134-f92d0b413d14/cert-manager-controller/0.log" Feb 26 10:00:31 crc kubenswrapper[4741]: I0226 
10:00:31.909875 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-k969n_b979c6a5-dfb5-43c3-8787-0d4e96bebd64/cert-manager-webhook/0.log" Feb 26 10:00:31 crc kubenswrapper[4741]: I0226 10:00:31.945978 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-4zq2p_871ddbfc-c6f2-4eb2-ad70-053df3cdb01b/cert-manager-cainjector/0.log" Feb 26 10:00:37 crc kubenswrapper[4741]: I0226 10:00:37.433489 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ljs62" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="registry-server" probeResult="failure" output=< Feb 26 10:00:37 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 10:00:37 crc kubenswrapper[4741]: > Feb 26 10:00:37 crc kubenswrapper[4741]: I0226 10:00:37.806356 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 10:00:37 crc kubenswrapper[4741]: I0226 10:00:37.866582 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 10:00:38 crc kubenswrapper[4741]: I0226 10:00:38.069051 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdlqk"] Feb 26 10:00:39 crc kubenswrapper[4741]: I0226 10:00:39.334537 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mdlqk" podUID="55725aa8-b544-4c46-863a-87d0a10990a4" containerName="registry-server" containerID="cri-o://49635b5db4076b897df618d1924c02fbc2e0ec60d2b24f2ee9049982b579712c" gracePeriod=2 Feb 26 10:00:40 crc kubenswrapper[4741]: I0226 10:00:40.368952 4741 generic.go:334] "Generic (PLEG): container finished" podID="55725aa8-b544-4c46-863a-87d0a10990a4" 
containerID="49635b5db4076b897df618d1924c02fbc2e0ec60d2b24f2ee9049982b579712c" exitCode=0 Feb 26 10:00:40 crc kubenswrapper[4741]: I0226 10:00:40.369067 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdlqk" event={"ID":"55725aa8-b544-4c46-863a-87d0a10990a4","Type":"ContainerDied","Data":"49635b5db4076b897df618d1924c02fbc2e0ec60d2b24f2ee9049982b579712c"} Feb 26 10:00:40 crc kubenswrapper[4741]: I0226 10:00:40.804741 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 10:00:40 crc kubenswrapper[4741]: I0226 10:00:40.897399 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55725aa8-b544-4c46-863a-87d0a10990a4-utilities\") pod \"55725aa8-b544-4c46-863a-87d0a10990a4\" (UID: \"55725aa8-b544-4c46-863a-87d0a10990a4\") " Feb 26 10:00:40 crc kubenswrapper[4741]: I0226 10:00:40.897587 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6gs2\" (UniqueName: \"kubernetes.io/projected/55725aa8-b544-4c46-863a-87d0a10990a4-kube-api-access-v6gs2\") pod \"55725aa8-b544-4c46-863a-87d0a10990a4\" (UID: \"55725aa8-b544-4c46-863a-87d0a10990a4\") " Feb 26 10:00:40 crc kubenswrapper[4741]: I0226 10:00:40.898045 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55725aa8-b544-4c46-863a-87d0a10990a4-catalog-content\") pod \"55725aa8-b544-4c46-863a-87d0a10990a4\" (UID: \"55725aa8-b544-4c46-863a-87d0a10990a4\") " Feb 26 10:00:40 crc kubenswrapper[4741]: I0226 10:00:40.898326 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55725aa8-b544-4c46-863a-87d0a10990a4-utilities" (OuterVolumeSpecName: "utilities") pod "55725aa8-b544-4c46-863a-87d0a10990a4" (UID: 
"55725aa8-b544-4c46-863a-87d0a10990a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 10:00:40 crc kubenswrapper[4741]: I0226 10:00:40.899393 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55725aa8-b544-4c46-863a-87d0a10990a4-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 10:00:40 crc kubenswrapper[4741]: I0226 10:00:40.928816 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55725aa8-b544-4c46-863a-87d0a10990a4-kube-api-access-v6gs2" (OuterVolumeSpecName: "kube-api-access-v6gs2") pod "55725aa8-b544-4c46-863a-87d0a10990a4" (UID: "55725aa8-b544-4c46-863a-87d0a10990a4"). InnerVolumeSpecName "kube-api-access-v6gs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 10:00:40 crc kubenswrapper[4741]: I0226 10:00:40.930721 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55725aa8-b544-4c46-863a-87d0a10990a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55725aa8-b544-4c46-863a-87d0a10990a4" (UID: "55725aa8-b544-4c46-863a-87d0a10990a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 10:00:41 crc kubenswrapper[4741]: I0226 10:00:41.002381 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6gs2\" (UniqueName: \"kubernetes.io/projected/55725aa8-b544-4c46-863a-87d0a10990a4-kube-api-access-v6gs2\") on node \"crc\" DevicePath \"\"" Feb 26 10:00:41 crc kubenswrapper[4741]: I0226 10:00:41.002699 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55725aa8-b544-4c46-863a-87d0a10990a4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 10:00:41 crc kubenswrapper[4741]: I0226 10:00:41.388041 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdlqk" event={"ID":"55725aa8-b544-4c46-863a-87d0a10990a4","Type":"ContainerDied","Data":"7e08ed082474b2bb5831958bb3b4c77e0749a8f5a9d855025fb2597693a51f74"} Feb 26 10:00:41 crc kubenswrapper[4741]: I0226 10:00:41.388148 4741 scope.go:117] "RemoveContainer" containerID="49635b5db4076b897df618d1924c02fbc2e0ec60d2b24f2ee9049982b579712c" Feb 26 10:00:41 crc kubenswrapper[4741]: I0226 10:00:41.388391 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdlqk" Feb 26 10:00:41 crc kubenswrapper[4741]: I0226 10:00:41.454187 4741 scope.go:117] "RemoveContainer" containerID="e340dc39e53e161423048a9951433c9b3843f69da0a0f233d66effce4d5b4638" Feb 26 10:00:41 crc kubenswrapper[4741]: I0226 10:00:41.456610 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdlqk"] Feb 26 10:00:41 crc kubenswrapper[4741]: I0226 10:00:41.473971 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdlqk"] Feb 26 10:00:41 crc kubenswrapper[4741]: I0226 10:00:41.814569 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55725aa8-b544-4c46-863a-87d0a10990a4" path="/var/lib/kubelet/pods/55725aa8-b544-4c46-863a-87d0a10990a4/volumes" Feb 26 10:00:41 crc kubenswrapper[4741]: I0226 10:00:41.819379 4741 scope.go:117] "RemoveContainer" containerID="c67646fe61a39bf4d7c439d1b78e4d05498f90480c8f80f448967061e6eda35d" Feb 26 10:00:47 crc kubenswrapper[4741]: I0226 10:00:47.134681 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ljs62" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="registry-server" probeResult="failure" output=< Feb 26 10:00:47 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 10:00:47 crc kubenswrapper[4741]: > Feb 26 10:00:53 crc kubenswrapper[4741]: I0226 10:00:53.720887 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-lbpr6_1b75dec6-9aac-4c45-aab5-5a08eed4baa5/nmstate-console-plugin/0.log" Feb 26 10:00:53 crc kubenswrapper[4741]: I0226 10:00:53.996692 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lc9nj_c06d2b98-49e9-4e2b-9b13-498c00d387a8/nmstate-handler/0.log" Feb 26 10:00:54 crc kubenswrapper[4741]: I0226 10:00:54.067057 4741 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-fl4pn_241e6742-0057-4919-94ff-1653ba2ebeba/kube-rbac-proxy/0.log" Feb 26 10:00:54 crc kubenswrapper[4741]: I0226 10:00:54.191343 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-srzj4_b0db9695-b9e5-440a-a1ad-aca0d5386fc6/nmstate-operator/0.log" Feb 26 10:00:54 crc kubenswrapper[4741]: I0226 10:00:54.195803 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-fl4pn_241e6742-0057-4919-94ff-1653ba2ebeba/nmstate-metrics/0.log" Feb 26 10:00:54 crc kubenswrapper[4741]: I0226 10:00:54.417428 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-8lpzw_dfce00da-1ed0-4246-af75-e66c5aa1bd39/nmstate-webhook/0.log" Feb 26 10:00:55 crc kubenswrapper[4741]: I0226 10:00:55.149601 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 10:00:55 crc kubenswrapper[4741]: I0226 10:00:55.149682 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 10:00:57 crc kubenswrapper[4741]: I0226 10:00:57.147601 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ljs62" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="registry-server" probeResult="failure" output=< Feb 26 10:00:57 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" 
within 1s Feb 26 10:00:57 crc kubenswrapper[4741]: > Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.222458 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29535001-bjqz4"] Feb 26 10:01:00 crc kubenswrapper[4741]: E0226 10:01:00.224809 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55725aa8-b544-4c46-863a-87d0a10990a4" containerName="extract-content" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.224841 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="55725aa8-b544-4c46-863a-87d0a10990a4" containerName="extract-content" Feb 26 10:01:00 crc kubenswrapper[4741]: E0226 10:01:00.224884 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55725aa8-b544-4c46-863a-87d0a10990a4" containerName="extract-utilities" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.224894 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="55725aa8-b544-4c46-863a-87d0a10990a4" containerName="extract-utilities" Feb 26 10:01:00 crc kubenswrapper[4741]: E0226 10:01:00.224930 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f" containerName="collect-profiles" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.224939 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f" containerName="collect-profiles" Feb 26 10:01:00 crc kubenswrapper[4741]: E0226 10:01:00.224977 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09171698-0f90-4ea8-ae5f-68ae73081d30" containerName="oc" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.224986 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="09171698-0f90-4ea8-ae5f-68ae73081d30" containerName="oc" Feb 26 10:01:00 crc kubenswrapper[4741]: E0226 10:01:00.225039 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55725aa8-b544-4c46-863a-87d0a10990a4" containerName="registry-server" Feb 26 
10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.225047 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="55725aa8-b544-4c46-863a-87d0a10990a4" containerName="registry-server" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.225874 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc2487b-3f1b-4afe-8535-d8ef75d4cc2f" containerName="collect-profiles" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.225920 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="09171698-0f90-4ea8-ae5f-68ae73081d30" containerName="oc" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.225972 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="55725aa8-b544-4c46-863a-87d0a10990a4" containerName="registry-server" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.227684 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.249817 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535001-bjqz4"] Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.298346 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-fernet-keys\") pod \"keystone-cron-29535001-bjqz4\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.298619 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-config-data\") pod \"keystone-cron-29535001-bjqz4\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.298868 4741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-combined-ca-bundle\") pod \"keystone-cron-29535001-bjqz4\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.299097 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvtpq\" (UniqueName: \"kubernetes.io/projected/63befd1f-6a07-441b-8b78-6c8759671066-kube-api-access-mvtpq\") pod \"keystone-cron-29535001-bjqz4\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.404638 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-combined-ca-bundle\") pod \"keystone-cron-29535001-bjqz4\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.404723 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvtpq\" (UniqueName: \"kubernetes.io/projected/63befd1f-6a07-441b-8b78-6c8759671066-kube-api-access-mvtpq\") pod \"keystone-cron-29535001-bjqz4\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.404852 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-fernet-keys\") pod \"keystone-cron-29535001-bjqz4\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 
10:01:00.404934 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-config-data\") pod \"keystone-cron-29535001-bjqz4\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.463857 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-fernet-keys\") pod \"keystone-cron-29535001-bjqz4\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.464631 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-combined-ca-bundle\") pod \"keystone-cron-29535001-bjqz4\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.466055 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-config-data\") pod \"keystone-cron-29535001-bjqz4\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.479838 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvtpq\" (UniqueName: \"kubernetes.io/projected/63befd1f-6a07-441b-8b78-6c8759671066-kube-api-access-mvtpq\") pod \"keystone-cron-29535001-bjqz4\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:00 crc kubenswrapper[4741]: I0226 10:01:00.578695 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:02 crc kubenswrapper[4741]: I0226 10:01:02.120274 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535001-bjqz4"] Feb 26 10:01:02 crc kubenswrapper[4741]: I0226 10:01:02.594137 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535001-bjqz4" event={"ID":"63befd1f-6a07-441b-8b78-6c8759671066","Type":"ContainerStarted","Data":"466f525224da31baf159358185d3786f86fa5795a7d2202036cd18bcecc4eb66"} Feb 26 10:01:02 crc kubenswrapper[4741]: I0226 10:01:02.594525 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535001-bjqz4" event={"ID":"63befd1f-6a07-441b-8b78-6c8759671066","Type":"ContainerStarted","Data":"5d46a6becb2553db1b4c403bd2807f2e424d95c0ae08ce5e7fb474819a90072b"} Feb 26 10:01:03 crc kubenswrapper[4741]: I0226 10:01:03.630939 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29535001-bjqz4" podStartSLOduration=3.630907435 podStartE2EDuration="3.630907435s" podCreationTimestamp="2026-02-26 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 10:01:03.619010057 +0000 UTC m=+6498.614947434" watchObservedRunningTime="2026-02-26 10:01:03.630907435 +0000 UTC m=+6498.626844822" Feb 26 10:01:07 crc kubenswrapper[4741]: I0226 10:01:07.146791 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ljs62" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="registry-server" probeResult="failure" output=< Feb 26 10:01:07 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 10:01:07 crc kubenswrapper[4741]: > Feb 26 10:01:11 crc kubenswrapper[4741]: I0226 10:01:11.105244 4741 scope.go:117] "RemoveContainer" 
containerID="bbe5839017308bcddbabd1f58ac751552b96cfdb61a6a90dc9e7e84e4f05269f" Feb 26 10:01:11 crc kubenswrapper[4741]: I0226 10:01:11.535404 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c89769cfb-mbqvs_b7349090-2a42-41d0-9bed-5624de634744/kube-rbac-proxy/0.log" Feb 26 10:01:11 crc kubenswrapper[4741]: I0226 10:01:11.602709 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c89769cfb-mbqvs_b7349090-2a42-41d0-9bed-5624de634744/manager/1.log" Feb 26 10:01:11 crc kubenswrapper[4741]: I0226 10:01:11.915528 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c89769cfb-mbqvs_b7349090-2a42-41d0-9bed-5624de634744/manager/0.log" Feb 26 10:01:12 crc kubenswrapper[4741]: I0226 10:01:12.740585 4741 generic.go:334] "Generic (PLEG): container finished" podID="63befd1f-6a07-441b-8b78-6c8759671066" containerID="466f525224da31baf159358185d3786f86fa5795a7d2202036cd18bcecc4eb66" exitCode=0 Feb 26 10:01:12 crc kubenswrapper[4741]: I0226 10:01:12.740805 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535001-bjqz4" event={"ID":"63befd1f-6a07-441b-8b78-6c8759671066","Type":"ContainerDied","Data":"466f525224da31baf159358185d3786f86fa5795a7d2202036cd18bcecc4eb66"} Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.528056 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.637592 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-fernet-keys\") pod \"63befd1f-6a07-441b-8b78-6c8759671066\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.637658 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-config-data\") pod \"63befd1f-6a07-441b-8b78-6c8759671066\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.637732 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-combined-ca-bundle\") pod \"63befd1f-6a07-441b-8b78-6c8759671066\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.638209 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvtpq\" (UniqueName: \"kubernetes.io/projected/63befd1f-6a07-441b-8b78-6c8759671066-kube-api-access-mvtpq\") pod \"63befd1f-6a07-441b-8b78-6c8759671066\" (UID: \"63befd1f-6a07-441b-8b78-6c8759671066\") " Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.648414 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "63befd1f-6a07-441b-8b78-6c8759671066" (UID: "63befd1f-6a07-441b-8b78-6c8759671066"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.671044 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63befd1f-6a07-441b-8b78-6c8759671066-kube-api-access-mvtpq" (OuterVolumeSpecName: "kube-api-access-mvtpq") pod "63befd1f-6a07-441b-8b78-6c8759671066" (UID: "63befd1f-6a07-441b-8b78-6c8759671066"). InnerVolumeSpecName "kube-api-access-mvtpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.690667 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63befd1f-6a07-441b-8b78-6c8759671066" (UID: "63befd1f-6a07-441b-8b78-6c8759671066"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.738821 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-config-data" (OuterVolumeSpecName: "config-data") pod "63befd1f-6a07-441b-8b78-6c8759671066" (UID: "63befd1f-6a07-441b-8b78-6c8759671066"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.742429 4741 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.742465 4741 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.742481 4741 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63befd1f-6a07-441b-8b78-6c8759671066-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.742496 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvtpq\" (UniqueName: \"kubernetes.io/projected/63befd1f-6a07-441b-8b78-6c8759671066-kube-api-access-mvtpq\") on node \"crc\" DevicePath \"\"" Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.766348 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535001-bjqz4" event={"ID":"63befd1f-6a07-441b-8b78-6c8759671066","Type":"ContainerDied","Data":"5d46a6becb2553db1b4c403bd2807f2e424d95c0ae08ce5e7fb474819a90072b"} Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.766405 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d46a6becb2553db1b4c403bd2807f2e424d95c0ae08ce5e7fb474819a90072b" Feb 26 10:01:14 crc kubenswrapper[4741]: I0226 10:01:14.766443 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535001-bjqz4" Feb 26 10:01:16 crc kubenswrapper[4741]: I0226 10:01:16.156155 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ljs62" Feb 26 10:01:16 crc kubenswrapper[4741]: I0226 10:01:16.220226 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ljs62" Feb 26 10:01:16 crc kubenswrapper[4741]: I0226 10:01:16.403198 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ljs62"] Feb 26 10:01:17 crc kubenswrapper[4741]: I0226 10:01:17.799941 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ljs62" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="registry-server" containerID="cri-o://98ffe0da02b85f19cdc88955cd6c8a8455e20879ae70792bbca4dab3f8b7fddb" gracePeriod=2 Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.478142 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ljs62" Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.566589 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52wrd\" (UniqueName: \"kubernetes.io/projected/d5ed8876-2789-4cab-b3d0-a9f684e04573-kube-api-access-52wrd\") pod \"d5ed8876-2789-4cab-b3d0-a9f684e04573\" (UID: \"d5ed8876-2789-4cab-b3d0-a9f684e04573\") " Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.566899 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8876-2789-4cab-b3d0-a9f684e04573-catalog-content\") pod \"d5ed8876-2789-4cab-b3d0-a9f684e04573\" (UID: \"d5ed8876-2789-4cab-b3d0-a9f684e04573\") " Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.567264 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8876-2789-4cab-b3d0-a9f684e04573-utilities\") pod \"d5ed8876-2789-4cab-b3d0-a9f684e04573\" (UID: \"d5ed8876-2789-4cab-b3d0-a9f684e04573\") " Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.567673 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ed8876-2789-4cab-b3d0-a9f684e04573-utilities" (OuterVolumeSpecName: "utilities") pod "d5ed8876-2789-4cab-b3d0-a9f684e04573" (UID: "d5ed8876-2789-4cab-b3d0-a9f684e04573"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.569209 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8876-2789-4cab-b3d0-a9f684e04573-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.582477 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ed8876-2789-4cab-b3d0-a9f684e04573-kube-api-access-52wrd" (OuterVolumeSpecName: "kube-api-access-52wrd") pod "d5ed8876-2789-4cab-b3d0-a9f684e04573" (UID: "d5ed8876-2789-4cab-b3d0-a9f684e04573"). InnerVolumeSpecName "kube-api-access-52wrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.642606 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ed8876-2789-4cab-b3d0-a9f684e04573-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5ed8876-2789-4cab-b3d0-a9f684e04573" (UID: "d5ed8876-2789-4cab-b3d0-a9f684e04573"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.672474 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52wrd\" (UniqueName: \"kubernetes.io/projected/d5ed8876-2789-4cab-b3d0-a9f684e04573-kube-api-access-52wrd\") on node \"crc\" DevicePath \"\"" Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.672533 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8876-2789-4cab-b3d0-a9f684e04573-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.817077 4741 generic.go:334] "Generic (PLEG): container finished" podID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerID="98ffe0da02b85f19cdc88955cd6c8a8455e20879ae70792bbca4dab3f8b7fddb" exitCode=0 Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.817204 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljs62" event={"ID":"d5ed8876-2789-4cab-b3d0-a9f684e04573","Type":"ContainerDied","Data":"98ffe0da02b85f19cdc88955cd6c8a8455e20879ae70792bbca4dab3f8b7fddb"} Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.817285 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljs62" event={"ID":"d5ed8876-2789-4cab-b3d0-a9f684e04573","Type":"ContainerDied","Data":"740251907aa5a473330768888bdd6e9b01fd67f649d0ede719cb56630a562fae"} Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.817309 4741 scope.go:117] "RemoveContainer" containerID="98ffe0da02b85f19cdc88955cd6c8a8455e20879ae70792bbca4dab3f8b7fddb" Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.817229 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ljs62" Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.853278 4741 scope.go:117] "RemoveContainer" containerID="f6d0339f650ad45115834a34a998e9eb732a44843985a438e74146256bf929af" Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.881321 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ljs62"] Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.882098 4741 scope.go:117] "RemoveContainer" containerID="1f1cb2fd806d40a01fbb8fd6a0a85235449f21d99ca54049f7d8ec68055a81a8" Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.896839 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ljs62"] Feb 26 10:01:18 crc kubenswrapper[4741]: I0226 10:01:18.998588 4741 scope.go:117] "RemoveContainer" containerID="98ffe0da02b85f19cdc88955cd6c8a8455e20879ae70792bbca4dab3f8b7fddb" Feb 26 10:01:19 crc kubenswrapper[4741]: E0226 10:01:18.999969 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ffe0da02b85f19cdc88955cd6c8a8455e20879ae70792bbca4dab3f8b7fddb\": container with ID starting with 98ffe0da02b85f19cdc88955cd6c8a8455e20879ae70792bbca4dab3f8b7fddb not found: ID does not exist" containerID="98ffe0da02b85f19cdc88955cd6c8a8455e20879ae70792bbca4dab3f8b7fddb" Feb 26 10:01:19 crc kubenswrapper[4741]: I0226 10:01:19.000023 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ffe0da02b85f19cdc88955cd6c8a8455e20879ae70792bbca4dab3f8b7fddb"} err="failed to get container status \"98ffe0da02b85f19cdc88955cd6c8a8455e20879ae70792bbca4dab3f8b7fddb\": rpc error: code = NotFound desc = could not find container \"98ffe0da02b85f19cdc88955cd6c8a8455e20879ae70792bbca4dab3f8b7fddb\": container with ID starting with 98ffe0da02b85f19cdc88955cd6c8a8455e20879ae70792bbca4dab3f8b7fddb not 
found: ID does not exist" Feb 26 10:01:19 crc kubenswrapper[4741]: I0226 10:01:19.000055 4741 scope.go:117] "RemoveContainer" containerID="f6d0339f650ad45115834a34a998e9eb732a44843985a438e74146256bf929af" Feb 26 10:01:19 crc kubenswrapper[4741]: E0226 10:01:19.000638 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d0339f650ad45115834a34a998e9eb732a44843985a438e74146256bf929af\": container with ID starting with f6d0339f650ad45115834a34a998e9eb732a44843985a438e74146256bf929af not found: ID does not exist" containerID="f6d0339f650ad45115834a34a998e9eb732a44843985a438e74146256bf929af" Feb 26 10:01:19 crc kubenswrapper[4741]: I0226 10:01:19.000689 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d0339f650ad45115834a34a998e9eb732a44843985a438e74146256bf929af"} err="failed to get container status \"f6d0339f650ad45115834a34a998e9eb732a44843985a438e74146256bf929af\": rpc error: code = NotFound desc = could not find container \"f6d0339f650ad45115834a34a998e9eb732a44843985a438e74146256bf929af\": container with ID starting with f6d0339f650ad45115834a34a998e9eb732a44843985a438e74146256bf929af not found: ID does not exist" Feb 26 10:01:19 crc kubenswrapper[4741]: I0226 10:01:19.000724 4741 scope.go:117] "RemoveContainer" containerID="1f1cb2fd806d40a01fbb8fd6a0a85235449f21d99ca54049f7d8ec68055a81a8" Feb 26 10:01:19 crc kubenswrapper[4741]: E0226 10:01:19.001168 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f1cb2fd806d40a01fbb8fd6a0a85235449f21d99ca54049f7d8ec68055a81a8\": container with ID starting with 1f1cb2fd806d40a01fbb8fd6a0a85235449f21d99ca54049f7d8ec68055a81a8 not found: ID does not exist" containerID="1f1cb2fd806d40a01fbb8fd6a0a85235449f21d99ca54049f7d8ec68055a81a8" Feb 26 10:01:19 crc kubenswrapper[4741]: I0226 10:01:19.001206 4741 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f1cb2fd806d40a01fbb8fd6a0a85235449f21d99ca54049f7d8ec68055a81a8"} err="failed to get container status \"1f1cb2fd806d40a01fbb8fd6a0a85235449f21d99ca54049f7d8ec68055a81a8\": rpc error: code = NotFound desc = could not find container \"1f1cb2fd806d40a01fbb8fd6a0a85235449f21d99ca54049f7d8ec68055a81a8\": container with ID starting with 1f1cb2fd806d40a01fbb8fd6a0a85235449f21d99ca54049f7d8ec68055a81a8 not found: ID does not exist" Feb 26 10:01:19 crc kubenswrapper[4741]: I0226 10:01:19.803468 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" path="/var/lib/kubelet/pods/d5ed8876-2789-4cab-b3d0-a9f684e04573/volumes" Feb 26 10:01:25 crc kubenswrapper[4741]: I0226 10:01:25.149446 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 10:01:25 crc kubenswrapper[4741]: I0226 10:01:25.149942 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 10:01:25 crc kubenswrapper[4741]: I0226 10:01:25.149998 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 10:01:25 crc kubenswrapper[4741]: I0226 10:01:25.152463 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 10:01:25 crc kubenswrapper[4741]: I0226 10:01:25.152559 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" gracePeriod=600 Feb 26 10:01:25 crc kubenswrapper[4741]: E0226 10:01:25.309880 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:01:25 crc kubenswrapper[4741]: I0226 10:01:25.915550 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" exitCode=0 Feb 26 10:01:25 crc kubenswrapper[4741]: I0226 10:01:25.915690 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576"} Feb 26 10:01:25 crc kubenswrapper[4741]: I0226 10:01:25.915973 4741 scope.go:117] "RemoveContainer" containerID="f65ec1d0cad211656d2d423786ba8e61cca81111a476f3647f3bac068f0670b6" Feb 26 10:01:25 crc kubenswrapper[4741]: I0226 10:01:25.917271 4741 
scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:01:25 crc kubenswrapper[4741]: E0226 10:01:25.917691 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:01:28 crc kubenswrapper[4741]: I0226 10:01:28.102814 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-mwbdb_93b9d30b-e6dd-43d5-8599-eee30ab515a5/prometheus-operator/0.log" Feb 26 10:01:28 crc kubenswrapper[4741]: I0226 10:01:28.366737 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a/prometheus-operator-admission-webhook/0.log" Feb 26 10:01:28 crc kubenswrapper[4741]: I0226 10:01:28.421565 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_fec9499c-44d9-45b6-8361-76b7c0e2ed54/prometheus-operator-admission-webhook/0.log" Feb 26 10:01:28 crc kubenswrapper[4741]: I0226 10:01:28.630545 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9vkjt_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af/operator/0.log" Feb 26 10:01:28 crc kubenswrapper[4741]: I0226 10:01:28.715752 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-8jc6t_432351e8-6adb-434f-b110-c141a4123d2c/observability-ui-dashboards/0.log" Feb 26 10:01:28 crc kubenswrapper[4741]: I0226 10:01:28.844209 
4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7c8vj_90d97168-5e93-4e51-b66e-d35fc864211d/perses-operator/0.log" Feb 26 10:01:39 crc kubenswrapper[4741]: I0226 10:01:39.792503 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:01:39 crc kubenswrapper[4741]: E0226 10:01:39.799539 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:01:46 crc kubenswrapper[4741]: I0226 10:01:46.904983 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-vqmnv_b8ef11d6-01e5-4e65-a38c-1631d3a423ba/cluster-logging-operator/0.log" Feb 26 10:01:47 crc kubenswrapper[4741]: I0226 10:01:47.455917 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-xvznb_8ea592fd-176d-496d-a1f2-67c6c3215be1/collector/0.log" Feb 26 10:01:47 crc kubenswrapper[4741]: I0226 10:01:47.691831 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_88cf7afe-52dd-437f-9739-e4f112fef5e8/loki-compactor/0.log" Feb 26 10:01:47 crc kubenswrapper[4741]: I0226 10:01:47.755772 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-fvl8r_091900f2-d6cc-4fbb-8b1b-f4216f868a9c/loki-distributor/0.log" Feb 26 10:01:47 crc kubenswrapper[4741]: I0226 10:01:47.813164 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-gateway-7bbb966984-jqlhm_b029b8c8-35eb-4509-a29a-9ada4434b899/gateway/0.log" Feb 26 10:01:47 crc kubenswrapper[4741]: I0226 10:01:47.917874 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-7bbb966984-jqlhm_b029b8c8-35eb-4509-a29a-9ada4434b899/opa/0.log" Feb 26 10:01:48 crc kubenswrapper[4741]: I0226 10:01:48.014537 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-7bbb966984-qjtwt_aad6cae3-3b9d-4d9e-8549-55da6e10901d/gateway/0.log" Feb 26 10:01:48 crc kubenswrapper[4741]: I0226 10:01:48.076311 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-7bbb966984-qjtwt_aad6cae3-3b9d-4d9e-8549-55da6e10901d/opa/0.log" Feb 26 10:01:48 crc kubenswrapper[4741]: I0226 10:01:48.255732 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_be2b1208-3e86-448e-beeb-86c6d953097d/loki-index-gateway/0.log" Feb 26 10:01:48 crc kubenswrapper[4741]: I0226 10:01:48.405403 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_5e9394a8-a585-40dc-8178-539b51408421/loki-ingester/0.log" Feb 26 10:01:48 crc kubenswrapper[4741]: I0226 10:01:48.510783 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-kvhl6_41e6c349-1fc0-4972-a080-55bb785a4bf7/loki-querier/0.log" Feb 26 10:01:48 crc kubenswrapper[4741]: I0226 10:01:48.675798 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-wjt7d_4896602d-060e-4777-957f-ff83ce8e812f/loki-query-frontend/0.log" Feb 26 10:01:50 crc kubenswrapper[4741]: I0226 10:01:50.789444 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:01:50 crc 
kubenswrapper[4741]: E0226 10:01:50.790247 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.154734 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535002-dsw9n"] Feb 26 10:02:00 crc kubenswrapper[4741]: E0226 10:02:00.156522 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="extract-content" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.156546 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="extract-content" Feb 26 10:02:00 crc kubenswrapper[4741]: E0226 10:02:00.156575 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="registry-server" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.156589 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="registry-server" Feb 26 10:02:00 crc kubenswrapper[4741]: E0226 10:02:00.156603 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63befd1f-6a07-441b-8b78-6c8759671066" containerName="keystone-cron" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.156613 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="63befd1f-6a07-441b-8b78-6c8759671066" containerName="keystone-cron" Feb 26 10:02:00 crc kubenswrapper[4741]: E0226 10:02:00.156644 4741 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="extract-utilities" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.156653 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="extract-utilities" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.156971 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="63befd1f-6a07-441b-8b78-6c8759671066" containerName="keystone-cron" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.156997 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ed8876-2789-4cab-b3d0-a9f684e04573" containerName="registry-server" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.162459 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535002-dsw9n" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.168776 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535002-dsw9n"] Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.183195 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.183715 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.183785 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.314926 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9k8w\" (UniqueName: \"kubernetes.io/projected/db97b7d2-03ce-4791-a8ba-92eade2b3dfd-kube-api-access-s9k8w\") pod \"auto-csr-approver-29535002-dsw9n\" (UID: \"db97b7d2-03ce-4791-a8ba-92eade2b3dfd\") " 
pod="openshift-infra/auto-csr-approver-29535002-dsw9n" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.420146 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9k8w\" (UniqueName: \"kubernetes.io/projected/db97b7d2-03ce-4791-a8ba-92eade2b3dfd-kube-api-access-s9k8w\") pod \"auto-csr-approver-29535002-dsw9n\" (UID: \"db97b7d2-03ce-4791-a8ba-92eade2b3dfd\") " pod="openshift-infra/auto-csr-approver-29535002-dsw9n" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.512733 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9k8w\" (UniqueName: \"kubernetes.io/projected/db97b7d2-03ce-4791-a8ba-92eade2b3dfd-kube-api-access-s9k8w\") pod \"auto-csr-approver-29535002-dsw9n\" (UID: \"db97b7d2-03ce-4791-a8ba-92eade2b3dfd\") " pod="openshift-infra/auto-csr-approver-29535002-dsw9n" Feb 26 10:02:00 crc kubenswrapper[4741]: I0226 10:02:00.797357 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535002-dsw9n" Feb 26 10:02:01 crc kubenswrapper[4741]: I0226 10:02:01.601929 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535002-dsw9n"] Feb 26 10:02:01 crc kubenswrapper[4741]: I0226 10:02:01.628476 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 10:02:01 crc kubenswrapper[4741]: I0226 10:02:01.789766 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:02:01 crc kubenswrapper[4741]: E0226 10:02:01.790158 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:02:02 crc kubenswrapper[4741]: I0226 10:02:02.416850 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535002-dsw9n" event={"ID":"db97b7d2-03ce-4791-a8ba-92eade2b3dfd","Type":"ContainerStarted","Data":"bfc623367ef94846b7a694ff2e8d08b4d6f63bba148b696e85f7bae7f91ce745"} Feb 26 10:02:05 crc kubenswrapper[4741]: I0226 10:02:05.479001 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535002-dsw9n" event={"ID":"db97b7d2-03ce-4791-a8ba-92eade2b3dfd","Type":"ContainerStarted","Data":"3b3a4231c802faf1ea99dee1ebfa20e0a05c09570ccf48007577e0d52390c7d0"} Feb 26 10:02:05 crc kubenswrapper[4741]: I0226 10:02:05.515740 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535002-dsw9n" podStartSLOduration=3.466805474 podStartE2EDuration="5.515698137s" podCreationTimestamp="2026-02-26 10:02:00 +0000 UTC" firstStartedPulling="2026-02-26 10:02:01.616650554 +0000 UTC m=+6556.612587951" lastFinishedPulling="2026-02-26 10:02:03.665543227 +0000 UTC m=+6558.661480614" observedRunningTime="2026-02-26 10:02:05.49892196 +0000 UTC m=+6560.494859377" watchObservedRunningTime="2026-02-26 10:02:05.515698137 +0000 UTC m=+6560.511635524" Feb 26 10:02:06 crc kubenswrapper[4741]: I0226 10:02:06.664893 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-8nx8x_468a5a70-08db-488d-9f31-f9835091c5ee/kube-rbac-proxy/0.log" Feb 26 10:02:06 crc kubenswrapper[4741]: I0226 10:02:06.900593 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-8nx8x_468a5a70-08db-488d-9f31-f9835091c5ee/controller/0.log" Feb 26 10:02:07 crc kubenswrapper[4741]: I0226 10:02:07.017208 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/cp-frr-files/0.log" Feb 26 10:02:07 crc kubenswrapper[4741]: I0226 10:02:07.232068 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/cp-metrics/0.log" Feb 26 10:02:07 crc kubenswrapper[4741]: I0226 10:02:07.251481 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/cp-reloader/0.log" Feb 26 10:02:07 crc kubenswrapper[4741]: I0226 10:02:07.278632 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/cp-reloader/0.log" Feb 26 10:02:07 crc kubenswrapper[4741]: I0226 10:02:07.282604 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/cp-frr-files/0.log" Feb 26 10:02:07 crc kubenswrapper[4741]: I0226 10:02:07.518781 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/cp-metrics/0.log" Feb 26 10:02:07 crc kubenswrapper[4741]: I0226 10:02:07.564104 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/cp-frr-files/0.log" Feb 26 10:02:07 crc kubenswrapper[4741]: I0226 10:02:07.590676 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/cp-reloader/0.log" Feb 26 10:02:07 crc kubenswrapper[4741]: I0226 10:02:07.628431 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/cp-metrics/0.log" Feb 26 10:02:07 crc kubenswrapper[4741]: I0226 10:02:07.824541 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/cp-metrics/0.log" Feb 26 10:02:07 crc kubenswrapper[4741]: I0226 10:02:07.828374 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/cp-frr-files/0.log" Feb 26 10:02:07 crc kubenswrapper[4741]: I0226 10:02:07.873868 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/cp-reloader/0.log" Feb 26 10:02:07 crc kubenswrapper[4741]: I0226 10:02:07.902403 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/controller/0.log" Feb 26 10:02:08 crc kubenswrapper[4741]: I0226 10:02:08.250983 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/frr-metrics/0.log" Feb 26 10:02:08 crc kubenswrapper[4741]: I0226 10:02:08.256672 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/kube-rbac-proxy/0.log" Feb 26 10:02:08 crc kubenswrapper[4741]: I0226 10:02:08.456618 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/kube-rbac-proxy-frr/0.log" Feb 26 10:02:08 crc kubenswrapper[4741]: I0226 10:02:08.518391 4741 generic.go:334] "Generic (PLEG): container finished" podID="db97b7d2-03ce-4791-a8ba-92eade2b3dfd" containerID="3b3a4231c802faf1ea99dee1ebfa20e0a05c09570ccf48007577e0d52390c7d0" exitCode=0 Feb 26 10:02:08 crc kubenswrapper[4741]: I0226 10:02:08.518624 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535002-dsw9n" event={"ID":"db97b7d2-03ce-4791-a8ba-92eade2b3dfd","Type":"ContainerDied","Data":"3b3a4231c802faf1ea99dee1ebfa20e0a05c09570ccf48007577e0d52390c7d0"} Feb 26 10:02:08 crc 
kubenswrapper[4741]: I0226 10:02:08.525525 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/reloader/0.log" Feb 26 10:02:08 crc kubenswrapper[4741]: I0226 10:02:08.768724 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/frr/1.log" Feb 26 10:02:09 crc kubenswrapper[4741]: I0226 10:02:09.165829 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-mcrfb_6fe5145b-bbf9-47ac-b53e-1282479db87d/frr-k8s-webhook-server/0.log" Feb 26 10:02:09 crc kubenswrapper[4741]: I0226 10:02:09.326469 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64545648d6-pt5sq_bb123f4a-b95e-413e-8d1b-a5efc5cbacdd/manager/1.log" Feb 26 10:02:09 crc kubenswrapper[4741]: I0226 10:02:09.450032 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64545648d6-pt5sq_bb123f4a-b95e-413e-8d1b-a5efc5cbacdd/manager/0.log" Feb 26 10:02:09 crc kubenswrapper[4741]: I0226 10:02:09.627781 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-76f9f89dd9-x2csm_85855f0c-ab53-44f0-8f0d-ac0299c5fc24/webhook-server/0.log" Feb 26 10:02:09 crc kubenswrapper[4741]: I0226 10:02:09.765725 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tdw95_4bf58d3b-55b2-408e-ab70-84e96ef92a64/kube-rbac-proxy/0.log" Feb 26 10:02:10 crc kubenswrapper[4741]: I0226 10:02:10.086961 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535002-dsw9n" Feb 26 10:02:10 crc kubenswrapper[4741]: I0226 10:02:10.174835 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9k8w\" (UniqueName: \"kubernetes.io/projected/db97b7d2-03ce-4791-a8ba-92eade2b3dfd-kube-api-access-s9k8w\") pod \"db97b7d2-03ce-4791-a8ba-92eade2b3dfd\" (UID: \"db97b7d2-03ce-4791-a8ba-92eade2b3dfd\") " Feb 26 10:02:10 crc kubenswrapper[4741]: I0226 10:02:10.186427 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db97b7d2-03ce-4791-a8ba-92eade2b3dfd-kube-api-access-s9k8w" (OuterVolumeSpecName: "kube-api-access-s9k8w") pod "db97b7d2-03ce-4791-a8ba-92eade2b3dfd" (UID: "db97b7d2-03ce-4791-a8ba-92eade2b3dfd"). InnerVolumeSpecName "kube-api-access-s9k8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 10:02:10 crc kubenswrapper[4741]: I0226 10:02:10.278440 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9k8w\" (UniqueName: \"kubernetes.io/projected/db97b7d2-03ce-4791-a8ba-92eade2b3dfd-kube-api-access-s9k8w\") on node \"crc\" DevicePath \"\"" Feb 26 10:02:10 crc kubenswrapper[4741]: I0226 10:02:10.562095 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535002-dsw9n" event={"ID":"db97b7d2-03ce-4791-a8ba-92eade2b3dfd","Type":"ContainerDied","Data":"bfc623367ef94846b7a694ff2e8d08b4d6f63bba148b696e85f7bae7f91ce745"} Feb 26 10:02:10 crc kubenswrapper[4741]: I0226 10:02:10.562166 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfc623367ef94846b7a694ff2e8d08b4d6f63bba148b696e85f7bae7f91ce745" Feb 26 10:02:10 crc kubenswrapper[4741]: I0226 10:02:10.562237 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535002-dsw9n" Feb 26 10:02:10 crc kubenswrapper[4741]: I0226 10:02:10.654718 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534996-cqzpn"] Feb 26 10:02:10 crc kubenswrapper[4741]: I0226 10:02:10.669234 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534996-cqzpn"] Feb 26 10:02:10 crc kubenswrapper[4741]: I0226 10:02:10.777900 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tdw95_4bf58d3b-55b2-408e-ab70-84e96ef92a64/speaker/0.log" Feb 26 10:02:10 crc kubenswrapper[4741]: I0226 10:02:10.796518 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4pcr_a2705136-6518-4339-b135-2d6f71d0fe6b/frr/0.log" Feb 26 10:02:11 crc kubenswrapper[4741]: I0226 10:02:11.800992 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e032146-0ca3-4914-9303-16d588739174" path="/var/lib/kubelet/pods/3e032146-0ca3-4914-9303-16d588739174/volumes" Feb 26 10:02:13 crc kubenswrapper[4741]: I0226 10:02:13.788252 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:02:13 crc kubenswrapper[4741]: E0226 10:02:13.789219 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:02:24 crc kubenswrapper[4741]: I0226 10:02:24.787288 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:02:24 crc kubenswrapper[4741]: E0226 
10:02:24.788253 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:02:25 crc kubenswrapper[4741]: I0226 10:02:25.598925 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc_fa5f5a60-f494-43cb-9137-51ab0568037a/util/0.log" Feb 26 10:02:25 crc kubenswrapper[4741]: I0226 10:02:25.862601 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc_fa5f5a60-f494-43cb-9137-51ab0568037a/util/0.log" Feb 26 10:02:25 crc kubenswrapper[4741]: I0226 10:02:25.879035 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc_fa5f5a60-f494-43cb-9137-51ab0568037a/pull/0.log" Feb 26 10:02:25 crc kubenswrapper[4741]: I0226 10:02:25.879219 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc_fa5f5a60-f494-43cb-9137-51ab0568037a/pull/0.log" Feb 26 10:02:26 crc kubenswrapper[4741]: I0226 10:02:26.144459 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc_fa5f5a60-f494-43cb-9137-51ab0568037a/util/0.log" Feb 26 10:02:26 crc kubenswrapper[4741]: I0226 10:02:26.157925 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc_fa5f5a60-f494-43cb-9137-51ab0568037a/pull/0.log" Feb 26 10:02:26 crc kubenswrapper[4741]: I0226 10:02:26.165559 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829m6vc_fa5f5a60-f494-43cb-9137-51ab0568037a/extract/0.log" Feb 26 10:02:26 crc kubenswrapper[4741]: I0226 10:02:26.330479 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7_e17317b7-5def-4061-b7f8-a763e30b9868/util/0.log" Feb 26 10:02:26 crc kubenswrapper[4741]: I0226 10:02:26.627998 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7_e17317b7-5def-4061-b7f8-a763e30b9868/pull/0.log" Feb 26 10:02:26 crc kubenswrapper[4741]: I0226 10:02:26.667348 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7_e17317b7-5def-4061-b7f8-a763e30b9868/pull/0.log" Feb 26 10:02:26 crc kubenswrapper[4741]: I0226 10:02:26.677037 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7_e17317b7-5def-4061-b7f8-a763e30b9868/util/0.log" Feb 26 10:02:26 crc kubenswrapper[4741]: I0226 10:02:26.817039 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7_e17317b7-5def-4061-b7f8-a763e30b9868/util/0.log" Feb 26 10:02:26 crc kubenswrapper[4741]: I0226 10:02:26.894231 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7_e17317b7-5def-4061-b7f8-a763e30b9868/pull/0.log" Feb 26 
10:02:26 crc kubenswrapper[4741]: I0226 10:02:26.913555 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19spqr7_e17317b7-5def-4061-b7f8-a763e30b9868/extract/0.log" Feb 26 10:02:27 crc kubenswrapper[4741]: I0226 10:02:27.073408 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz_652b2cb9-4551-45e5-a4b0-6f6720ec0792/util/0.log" Feb 26 10:02:27 crc kubenswrapper[4741]: I0226 10:02:27.257124 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz_652b2cb9-4551-45e5-a4b0-6f6720ec0792/pull/0.log" Feb 26 10:02:27 crc kubenswrapper[4741]: I0226 10:02:27.302212 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz_652b2cb9-4551-45e5-a4b0-6f6720ec0792/pull/0.log" Feb 26 10:02:27 crc kubenswrapper[4741]: I0226 10:02:27.506958 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz_652b2cb9-4551-45e5-a4b0-6f6720ec0792/util/0.log" Feb 26 10:02:27 crc kubenswrapper[4741]: I0226 10:02:27.556671 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz_652b2cb9-4551-45e5-a4b0-6f6720ec0792/util/0.log" Feb 26 10:02:27 crc kubenswrapper[4741]: I0226 10:02:27.604956 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz_652b2cb9-4551-45e5-a4b0-6f6720ec0792/extract/0.log" Feb 26 10:02:27 crc kubenswrapper[4741]: I0226 10:02:27.646884 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fxptz_652b2cb9-4551-45e5-a4b0-6f6720ec0792/pull/0.log" Feb 26 10:02:27 crc kubenswrapper[4741]: I0226 10:02:27.810817 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vcnd_93b4b5c9-a048-4219-86a9-ef1ff11cc024/extract-utilities/0.log" Feb 26 10:02:28 crc kubenswrapper[4741]: I0226 10:02:28.098578 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vcnd_93b4b5c9-a048-4219-86a9-ef1ff11cc024/extract-content/0.log" Feb 26 10:02:28 crc kubenswrapper[4741]: I0226 10:02:28.110313 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vcnd_93b4b5c9-a048-4219-86a9-ef1ff11cc024/extract-content/0.log" Feb 26 10:02:28 crc kubenswrapper[4741]: I0226 10:02:28.120245 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vcnd_93b4b5c9-a048-4219-86a9-ef1ff11cc024/extract-utilities/0.log" Feb 26 10:02:28 crc kubenswrapper[4741]: I0226 10:02:28.318198 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vcnd_93b4b5c9-a048-4219-86a9-ef1ff11cc024/extract-utilities/0.log" Feb 26 10:02:28 crc kubenswrapper[4741]: I0226 10:02:28.404058 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vcnd_93b4b5c9-a048-4219-86a9-ef1ff11cc024/extract-content/0.log" Feb 26 10:02:28 crc kubenswrapper[4741]: I0226 10:02:28.662326 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mscg4_cfb44914-3bd9-4c8c-937b-cccc55045fc6/extract-utilities/0.log" Feb 26 10:02:28 crc kubenswrapper[4741]: I0226 10:02:28.984589 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-mscg4_cfb44914-3bd9-4c8c-937b-cccc55045fc6/extract-content/0.log" Feb 26 10:02:29 crc kubenswrapper[4741]: I0226 10:02:29.053263 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mscg4_cfb44914-3bd9-4c8c-937b-cccc55045fc6/extract-utilities/0.log" Feb 26 10:02:29 crc kubenswrapper[4741]: I0226 10:02:29.078482 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mscg4_cfb44914-3bd9-4c8c-937b-cccc55045fc6/extract-content/0.log" Feb 26 10:02:29 crc kubenswrapper[4741]: I0226 10:02:29.292628 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mscg4_cfb44914-3bd9-4c8c-937b-cccc55045fc6/extract-content/0.log" Feb 26 10:02:29 crc kubenswrapper[4741]: I0226 10:02:29.321876 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mscg4_cfb44914-3bd9-4c8c-937b-cccc55045fc6/extract-utilities/0.log" Feb 26 10:02:29 crc kubenswrapper[4741]: I0226 10:02:29.625347 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g_8a5cc643-d3e2-4726-aac9-845145612f0e/util/0.log" Feb 26 10:02:30 crc kubenswrapper[4741]: I0226 10:02:30.368812 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g_8a5cc643-d3e2-4726-aac9-845145612f0e/util/0.log" Feb 26 10:02:30 crc kubenswrapper[4741]: I0226 10:02:30.369994 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g_8a5cc643-d3e2-4726-aac9-845145612f0e/pull/0.log" Feb 26 10:02:30 crc kubenswrapper[4741]: I0226 10:02:30.437545 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-2vcnd_93b4b5c9-a048-4219-86a9-ef1ff11cc024/registry-server/0.log" Feb 26 10:02:30 crc kubenswrapper[4741]: I0226 10:02:30.486812 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g_8a5cc643-d3e2-4726-aac9-845145612f0e/pull/0.log" Feb 26 10:02:30 crc kubenswrapper[4741]: I0226 10:02:30.837652 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g_8a5cc643-d3e2-4726-aac9-845145612f0e/util/0.log" Feb 26 10:02:30 crc kubenswrapper[4741]: I0226 10:02:30.837790 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g_8a5cc643-d3e2-4726-aac9-845145612f0e/extract/0.log" Feb 26 10:02:30 crc kubenswrapper[4741]: I0226 10:02:30.859704 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4txz6g_8a5cc643-d3e2-4726-aac9-845145612f0e/pull/0.log" Feb 26 10:02:30 crc kubenswrapper[4741]: I0226 10:02:30.961341 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mscg4_cfb44914-3bd9-4c8c-937b-cccc55045fc6/registry-server/0.log" Feb 26 10:02:31 crc kubenswrapper[4741]: I0226 10:02:31.077481 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z_8487ba68-4c63-4d44-a687-1ca047c859d2/util/0.log" Feb 26 10:02:31 crc kubenswrapper[4741]: I0226 10:02:31.258660 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z_8487ba68-4c63-4d44-a687-1ca047c859d2/pull/0.log" Feb 26 10:02:31 crc kubenswrapper[4741]: I0226 
10:02:31.292303 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z_8487ba68-4c63-4d44-a687-1ca047c859d2/pull/0.log" Feb 26 10:02:31 crc kubenswrapper[4741]: I0226 10:02:31.309908 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z_8487ba68-4c63-4d44-a687-1ca047c859d2/util/0.log" Feb 26 10:02:31 crc kubenswrapper[4741]: I0226 10:02:31.527006 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z_8487ba68-4c63-4d44-a687-1ca047c859d2/util/0.log" Feb 26 10:02:31 crc kubenswrapper[4741]: I0226 10:02:31.544988 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z_8487ba68-4c63-4d44-a687-1ca047c859d2/extract/0.log" Feb 26 10:02:31 crc kubenswrapper[4741]: I0226 10:02:31.578084 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gw76z_8487ba68-4c63-4d44-a687-1ca047c859d2/pull/0.log" Feb 26 10:02:31 crc kubenswrapper[4741]: I0226 10:02:31.605500 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tdzlz_dc5a16f1-f482-4a9f-81f0-b21fa200d4da/marketplace-operator/0.log" Feb 26 10:02:31 crc kubenswrapper[4741]: I0226 10:02:31.803757 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fzxmn_4ddcb17f-6b4a-4194-aab9-e24dc49c75e0/extract-utilities/0.log" Feb 26 10:02:32 crc kubenswrapper[4741]: I0226 10:02:32.032056 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fzxmn_4ddcb17f-6b4a-4194-aab9-e24dc49c75e0/extract-utilities/0.log" Feb 26 
10:02:32 crc kubenswrapper[4741]: I0226 10:02:32.034414 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fzxmn_4ddcb17f-6b4a-4194-aab9-e24dc49c75e0/extract-content/0.log" Feb 26 10:02:32 crc kubenswrapper[4741]: I0226 10:02:32.066526 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fzxmn_4ddcb17f-6b4a-4194-aab9-e24dc49c75e0/extract-content/0.log" Feb 26 10:02:32 crc kubenswrapper[4741]: I0226 10:02:32.267359 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fzxmn_4ddcb17f-6b4a-4194-aab9-e24dc49c75e0/extract-utilities/0.log" Feb 26 10:02:32 crc kubenswrapper[4741]: I0226 10:02:32.416230 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fzxmn_4ddcb17f-6b4a-4194-aab9-e24dc49c75e0/extract-content/0.log" Feb 26 10:02:32 crc kubenswrapper[4741]: I0226 10:02:32.430261 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-28q5n_c71842fc-fda8-481f-96d6-64b811178a92/extract-utilities/0.log" Feb 26 10:02:32 crc kubenswrapper[4741]: I0226 10:02:32.581391 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fzxmn_4ddcb17f-6b4a-4194-aab9-e24dc49c75e0/registry-server/0.log" Feb 26 10:02:32 crc kubenswrapper[4741]: I0226 10:02:32.631077 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-28q5n_c71842fc-fda8-481f-96d6-64b811178a92/extract-content/0.log" Feb 26 10:02:32 crc kubenswrapper[4741]: I0226 10:02:32.650290 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-28q5n_c71842fc-fda8-481f-96d6-64b811178a92/extract-utilities/0.log" Feb 26 10:02:32 crc kubenswrapper[4741]: I0226 10:02:32.679464 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-28q5n_c71842fc-fda8-481f-96d6-64b811178a92/extract-content/0.log" Feb 26 10:02:32 crc kubenswrapper[4741]: I0226 10:02:32.916203 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-28q5n_c71842fc-fda8-481f-96d6-64b811178a92/extract-utilities/0.log" Feb 26 10:02:32 crc kubenswrapper[4741]: I0226 10:02:32.953260 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-28q5n_c71842fc-fda8-481f-96d6-64b811178a92/extract-content/0.log" Feb 26 10:02:33 crc kubenswrapper[4741]: I0226 10:02:33.788985 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-28q5n_c71842fc-fda8-481f-96d6-64b811178a92/registry-server/0.log" Feb 26 10:02:36 crc kubenswrapper[4741]: I0226 10:02:36.788157 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:02:36 crc kubenswrapper[4741]: E0226 10:02:36.789039 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:02:48 crc kubenswrapper[4741]: I0226 10:02:48.115211 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-54874b9fc8-cgtm9_5c7ecfae-042e-4aad-8b9d-2e6a4284d75a/prometheus-operator-admission-webhook/0.log" Feb 26 10:02:48 crc kubenswrapper[4741]: I0226 10:02:48.126329 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-54874b9fc8-vftgd_fec9499c-44d9-45b6-8361-76b7c0e2ed54/prometheus-operator-admission-webhook/0.log" Feb 26 10:02:48 crc kubenswrapper[4741]: I0226 10:02:48.130755 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-mwbdb_93b9d30b-e6dd-43d5-8599-eee30ab515a5/prometheus-operator/0.log" Feb 26 10:02:48 crc kubenswrapper[4741]: I0226 10:02:48.330637 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-8jc6t_432351e8-6adb-434f-b110-c141a4123d2c/observability-ui-dashboards/0.log" Feb 26 10:02:48 crc kubenswrapper[4741]: I0226 10:02:48.379294 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7c8vj_90d97168-5e93-4e51-b66e-d35fc864211d/perses-operator/0.log" Feb 26 10:02:48 crc kubenswrapper[4741]: I0226 10:02:48.384764 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9vkjt_480c4db0-8b7a-4ef8-a2e6-c7289a9f21af/operator/0.log" Feb 26 10:02:49 crc kubenswrapper[4741]: I0226 10:02:49.788166 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:02:49 crc kubenswrapper[4741]: E0226 10:02:49.789047 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:03:00 crc kubenswrapper[4741]: I0226 10:03:00.788718 4741 scope.go:117] "RemoveContainer" 
containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:03:00 crc kubenswrapper[4741]: E0226 10:03:00.789662 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:03:02 crc kubenswrapper[4741]: I0226 10:03:02.233490 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c89769cfb-mbqvs_b7349090-2a42-41d0-9bed-5624de634744/kube-rbac-proxy/0.log" Feb 26 10:03:02 crc kubenswrapper[4741]: I0226 10:03:02.312914 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c89769cfb-mbqvs_b7349090-2a42-41d0-9bed-5624de634744/manager/1.log" Feb 26 10:03:02 crc kubenswrapper[4741]: I0226 10:03:02.375273 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c89769cfb-mbqvs_b7349090-2a42-41d0-9bed-5624de634744/manager/0.log" Feb 26 10:03:10 crc kubenswrapper[4741]: E0226 10:03:10.965224 4741 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.179s" Feb 26 10:03:11 crc kubenswrapper[4741]: I0226 10:03:11.455472 4741 scope.go:117] "RemoveContainer" containerID="d96a31babe63bef2ab06f2d7fd616b418f13f328d0435e57913b34cad6a0840c" Feb 26 10:03:13 crc kubenswrapper[4741]: I0226 10:03:13.788065 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:03:13 crc kubenswrapper[4741]: E0226 10:03:13.789130 4741 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:03:26 crc kubenswrapper[4741]: I0226 10:03:26.788189 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:03:26 crc kubenswrapper[4741]: E0226 10:03:26.789259 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:03:29 crc kubenswrapper[4741]: E0226 10:03:29.667703 4741 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.166:59648->38.102.83.166:36527: write tcp 38.102.83.166:59648->38.102.83.166:36527: write: broken pipe Feb 26 10:03:37 crc kubenswrapper[4741]: I0226 10:03:37.787665 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:03:37 crc kubenswrapper[4741]: E0226 10:03:37.788521 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 
10:03:50 crc kubenswrapper[4741]: I0226 10:03:50.788543 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:03:50 crc kubenswrapper[4741]: E0226 10:03:50.789399 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.020155 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cqqlj"] Feb 26 10:03:54 crc kubenswrapper[4741]: E0226 10:03:54.021548 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db97b7d2-03ce-4791-a8ba-92eade2b3dfd" containerName="oc" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.021567 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="db97b7d2-03ce-4791-a8ba-92eade2b3dfd" containerName="oc" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.021912 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="db97b7d2-03ce-4791-a8ba-92eade2b3dfd" containerName="oc" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.032369 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.154954 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cqqlj"] Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.195091 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bfrj\" (UniqueName: \"kubernetes.io/projected/16269f76-f249-4ab9-81ac-c6bd4706098b-kube-api-access-8bfrj\") pod \"community-operators-cqqlj\" (UID: \"16269f76-f249-4ab9-81ac-c6bd4706098b\") " pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.195194 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16269f76-f249-4ab9-81ac-c6bd4706098b-catalog-content\") pod \"community-operators-cqqlj\" (UID: \"16269f76-f249-4ab9-81ac-c6bd4706098b\") " pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.195229 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16269f76-f249-4ab9-81ac-c6bd4706098b-utilities\") pod \"community-operators-cqqlj\" (UID: \"16269f76-f249-4ab9-81ac-c6bd4706098b\") " pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.298014 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bfrj\" (UniqueName: \"kubernetes.io/projected/16269f76-f249-4ab9-81ac-c6bd4706098b-kube-api-access-8bfrj\") pod \"community-operators-cqqlj\" (UID: \"16269f76-f249-4ab9-81ac-c6bd4706098b\") " pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.298133 4741 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16269f76-f249-4ab9-81ac-c6bd4706098b-catalog-content\") pod \"community-operators-cqqlj\" (UID: \"16269f76-f249-4ab9-81ac-c6bd4706098b\") " pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.298192 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16269f76-f249-4ab9-81ac-c6bd4706098b-utilities\") pod \"community-operators-cqqlj\" (UID: \"16269f76-f249-4ab9-81ac-c6bd4706098b\") " pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.298742 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16269f76-f249-4ab9-81ac-c6bd4706098b-catalog-content\") pod \"community-operators-cqqlj\" (UID: \"16269f76-f249-4ab9-81ac-c6bd4706098b\") " pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.298808 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16269f76-f249-4ab9-81ac-c6bd4706098b-utilities\") pod \"community-operators-cqqlj\" (UID: \"16269f76-f249-4ab9-81ac-c6bd4706098b\") " pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.342720 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bfrj\" (UniqueName: \"kubernetes.io/projected/16269f76-f249-4ab9-81ac-c6bd4706098b-kube-api-access-8bfrj\") pod \"community-operators-cqqlj\" (UID: \"16269f76-f249-4ab9-81ac-c6bd4706098b\") " pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:03:54 crc kubenswrapper[4741]: I0226 10:03:54.359177 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:03:56 crc kubenswrapper[4741]: I0226 10:03:56.176676 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cqqlj"] Feb 26 10:03:56 crc kubenswrapper[4741]: I0226 10:03:56.535393 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqqlj" event={"ID":"16269f76-f249-4ab9-81ac-c6bd4706098b","Type":"ContainerStarted","Data":"e3a6a04f0e747f06151624f68277e4e1bccff0980df866238a7a4fc36ce480e6"} Feb 26 10:03:57 crc kubenswrapper[4741]: I0226 10:03:57.552139 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqqlj" event={"ID":"16269f76-f249-4ab9-81ac-c6bd4706098b","Type":"ContainerDied","Data":"a50e916713351fc53d207e597b59c59b4faf11f056c8ec4e18f81f28adb8d4bf"} Feb 26 10:03:57 crc kubenswrapper[4741]: I0226 10:03:57.552514 4741 generic.go:334] "Generic (PLEG): container finished" podID="16269f76-f249-4ab9-81ac-c6bd4706098b" containerID="a50e916713351fc53d207e597b59c59b4faf11f056c8ec4e18f81f28adb8d4bf" exitCode=0 Feb 26 10:03:59 crc kubenswrapper[4741]: I0226 10:03:59.578081 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqqlj" event={"ID":"16269f76-f249-4ab9-81ac-c6bd4706098b","Type":"ContainerStarted","Data":"5ab71dabe0c79ad7971a9206ff0e483726321734192acee91060e9eb1addd02a"} Feb 26 10:04:00 crc kubenswrapper[4741]: I0226 10:04:00.159341 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535004-7mn4j"] Feb 26 10:04:00 crc kubenswrapper[4741]: I0226 10:04:00.161747 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535004-7mn4j" Feb 26 10:04:00 crc kubenswrapper[4741]: I0226 10:04:00.164656 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 10:04:00 crc kubenswrapper[4741]: I0226 10:04:00.164913 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 10:04:00 crc kubenswrapper[4741]: I0226 10:04:00.167889 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 10:04:00 crc kubenswrapper[4741]: I0226 10:04:00.173155 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535004-7mn4j"] Feb 26 10:04:00 crc kubenswrapper[4741]: I0226 10:04:00.192963 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4drg2\" (UniqueName: \"kubernetes.io/projected/6a9eedff-1d03-4612-991c-7022c8f34cea-kube-api-access-4drg2\") pod \"auto-csr-approver-29535004-7mn4j\" (UID: \"6a9eedff-1d03-4612-991c-7022c8f34cea\") " pod="openshift-infra/auto-csr-approver-29535004-7mn4j" Feb 26 10:04:00 crc kubenswrapper[4741]: I0226 10:04:00.295791 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4drg2\" (UniqueName: \"kubernetes.io/projected/6a9eedff-1d03-4612-991c-7022c8f34cea-kube-api-access-4drg2\") pod \"auto-csr-approver-29535004-7mn4j\" (UID: \"6a9eedff-1d03-4612-991c-7022c8f34cea\") " pod="openshift-infra/auto-csr-approver-29535004-7mn4j" Feb 26 10:04:00 crc kubenswrapper[4741]: I0226 10:04:00.319539 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4drg2\" (UniqueName: \"kubernetes.io/projected/6a9eedff-1d03-4612-991c-7022c8f34cea-kube-api-access-4drg2\") pod \"auto-csr-approver-29535004-7mn4j\" (UID: \"6a9eedff-1d03-4612-991c-7022c8f34cea\") " 
pod="openshift-infra/auto-csr-approver-29535004-7mn4j" Feb 26 10:04:00 crc kubenswrapper[4741]: I0226 10:04:00.483766 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535004-7mn4j" Feb 26 10:04:01 crc kubenswrapper[4741]: I0226 10:04:01.062322 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535004-7mn4j"] Feb 26 10:04:01 crc kubenswrapper[4741]: W0226 10:04:01.067177 4741 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a9eedff_1d03_4612_991c_7022c8f34cea.slice/crio-de4b6c2db75f1bb04d99b3cd56447745037a2a57d2bdc6b28ba649fc0e7a4a48 WatchSource:0}: Error finding container de4b6c2db75f1bb04d99b3cd56447745037a2a57d2bdc6b28ba649fc0e7a4a48: Status 404 returned error can't find the container with id de4b6c2db75f1bb04d99b3cd56447745037a2a57d2bdc6b28ba649fc0e7a4a48 Feb 26 10:04:01 crc kubenswrapper[4741]: I0226 10:04:01.606897 4741 generic.go:334] "Generic (PLEG): container finished" podID="16269f76-f249-4ab9-81ac-c6bd4706098b" containerID="5ab71dabe0c79ad7971a9206ff0e483726321734192acee91060e9eb1addd02a" exitCode=0 Feb 26 10:04:01 crc kubenswrapper[4741]: I0226 10:04:01.607155 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqqlj" event={"ID":"16269f76-f249-4ab9-81ac-c6bd4706098b","Type":"ContainerDied","Data":"5ab71dabe0c79ad7971a9206ff0e483726321734192acee91060e9eb1addd02a"} Feb 26 10:04:01 crc kubenswrapper[4741]: I0226 10:04:01.613107 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535004-7mn4j" event={"ID":"6a9eedff-1d03-4612-991c-7022c8f34cea","Type":"ContainerStarted","Data":"de4b6c2db75f1bb04d99b3cd56447745037a2a57d2bdc6b28ba649fc0e7a4a48"} Feb 26 10:04:02 crc kubenswrapper[4741]: I0226 10:04:02.632759 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-cqqlj" event={"ID":"16269f76-f249-4ab9-81ac-c6bd4706098b","Type":"ContainerStarted","Data":"321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9"} Feb 26 10:04:03 crc kubenswrapper[4741]: I0226 10:04:03.688196 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cqqlj" podStartSLOduration=6.015449688 podStartE2EDuration="10.688166277s" podCreationTimestamp="2026-02-26 10:03:53 +0000 UTC" firstStartedPulling="2026-02-26 10:03:57.554894681 +0000 UTC m=+6672.550832068" lastFinishedPulling="2026-02-26 10:04:02.22761127 +0000 UTC m=+6677.223548657" observedRunningTime="2026-02-26 10:04:03.675513798 +0000 UTC m=+6678.671451195" watchObservedRunningTime="2026-02-26 10:04:03.688166277 +0000 UTC m=+6678.684103664" Feb 26 10:04:04 crc kubenswrapper[4741]: I0226 10:04:04.361458 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:04:04 crc kubenswrapper[4741]: I0226 10:04:04.361737 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:04:04 crc kubenswrapper[4741]: I0226 10:04:04.663780 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535004-7mn4j" event={"ID":"6a9eedff-1d03-4612-991c-7022c8f34cea","Type":"ContainerStarted","Data":"d1c54b1aee3ba5e719a3c0f03a7df9bb8c2bbee437e3c696c1c6aeac91f10e1b"} Feb 26 10:04:04 crc kubenswrapper[4741]: I0226 10:04:04.690764 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535004-7mn4j" podStartSLOduration=2.222815433 podStartE2EDuration="4.690735446s" podCreationTimestamp="2026-02-26 10:04:00 +0000 UTC" firstStartedPulling="2026-02-26 10:04:01.070514325 +0000 UTC m=+6676.066451722" lastFinishedPulling="2026-02-26 10:04:03.538434348 +0000 UTC 
m=+6678.534371735" observedRunningTime="2026-02-26 10:04:04.678031545 +0000 UTC m=+6679.673968932" watchObservedRunningTime="2026-02-26 10:04:04.690735446 +0000 UTC m=+6679.686672833" Feb 26 10:04:04 crc kubenswrapper[4741]: I0226 10:04:04.789699 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:04:04 crc kubenswrapper[4741]: E0226 10:04:04.790331 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:04:05 crc kubenswrapper[4741]: I0226 10:04:05.419403 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-cqqlj" podUID="16269f76-f249-4ab9-81ac-c6bd4706098b" containerName="registry-server" probeResult="failure" output=< Feb 26 10:04:05 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 10:04:05 crc kubenswrapper[4741]: > Feb 26 10:04:06 crc kubenswrapper[4741]: I0226 10:04:06.688010 4741 generic.go:334] "Generic (PLEG): container finished" podID="6a9eedff-1d03-4612-991c-7022c8f34cea" containerID="d1c54b1aee3ba5e719a3c0f03a7df9bb8c2bbee437e3c696c1c6aeac91f10e1b" exitCode=0 Feb 26 10:04:06 crc kubenswrapper[4741]: I0226 10:04:06.688087 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535004-7mn4j" event={"ID":"6a9eedff-1d03-4612-991c-7022c8f34cea","Type":"ContainerDied","Data":"d1c54b1aee3ba5e719a3c0f03a7df9bb8c2bbee437e3c696c1c6aeac91f10e1b"} Feb 26 10:04:08 crc kubenswrapper[4741]: I0226 10:04:08.159407 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535004-7mn4j" Feb 26 10:04:08 crc kubenswrapper[4741]: I0226 10:04:08.234604 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4drg2\" (UniqueName: \"kubernetes.io/projected/6a9eedff-1d03-4612-991c-7022c8f34cea-kube-api-access-4drg2\") pod \"6a9eedff-1d03-4612-991c-7022c8f34cea\" (UID: \"6a9eedff-1d03-4612-991c-7022c8f34cea\") " Feb 26 10:04:08 crc kubenswrapper[4741]: I0226 10:04:08.243673 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a9eedff-1d03-4612-991c-7022c8f34cea-kube-api-access-4drg2" (OuterVolumeSpecName: "kube-api-access-4drg2") pod "6a9eedff-1d03-4612-991c-7022c8f34cea" (UID: "6a9eedff-1d03-4612-991c-7022c8f34cea"). InnerVolumeSpecName "kube-api-access-4drg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 10:04:08 crc kubenswrapper[4741]: I0226 10:04:08.341916 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4drg2\" (UniqueName: \"kubernetes.io/projected/6a9eedff-1d03-4612-991c-7022c8f34cea-kube-api-access-4drg2\") on node \"crc\" DevicePath \"\"" Feb 26 10:04:08 crc kubenswrapper[4741]: I0226 10:04:08.730974 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535004-7mn4j" event={"ID":"6a9eedff-1d03-4612-991c-7022c8f34cea","Type":"ContainerDied","Data":"de4b6c2db75f1bb04d99b3cd56447745037a2a57d2bdc6b28ba649fc0e7a4a48"} Feb 26 10:04:08 crc kubenswrapper[4741]: I0226 10:04:08.731029 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de4b6c2db75f1bb04d99b3cd56447745037a2a57d2bdc6b28ba649fc0e7a4a48" Feb 26 10:04:08 crc kubenswrapper[4741]: I0226 10:04:08.731037 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535004-7mn4j" Feb 26 10:04:08 crc kubenswrapper[4741]: I0226 10:04:08.792873 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534998-jqwwg"] Feb 26 10:04:08 crc kubenswrapper[4741]: I0226 10:04:08.808674 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534998-jqwwg"] Feb 26 10:04:09 crc kubenswrapper[4741]: I0226 10:04:09.802537 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f86dcc-1861-414d-8554-f65b604a21b0" path="/var/lib/kubelet/pods/a6f86dcc-1861-414d-8554-f65b604a21b0/volumes" Feb 26 10:04:11 crc kubenswrapper[4741]: I0226 10:04:11.588136 4741 scope.go:117] "RemoveContainer" containerID="84034d73d633f131cbb2d30772fe9dcc960fc3625d9931aa3068074c2cf37fdf" Feb 26 10:04:11 crc kubenswrapper[4741]: I0226 10:04:11.658459 4741 scope.go:117] "RemoveContainer" containerID="f4de9a25dc509c99bc8e4a7e3a5d5cf22068bc3c2b5a9b26b009329202a39e2e" Feb 26 10:04:14 crc kubenswrapper[4741]: I0226 10:04:14.433903 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:04:14 crc kubenswrapper[4741]: I0226 10:04:14.506857 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:04:14 crc kubenswrapper[4741]: I0226 10:04:14.683240 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cqqlj"] Feb 26 10:04:15 crc kubenswrapper[4741]: I0226 10:04:15.852621 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cqqlj" podUID="16269f76-f249-4ab9-81ac-c6bd4706098b" containerName="registry-server" containerID="cri-o://321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9" gracePeriod=2 Feb 26 10:04:16 crc 
kubenswrapper[4741]: E0226 10:04:16.146967 4741 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16269f76_f249_4ab9_81ac_c6bd4706098b.slice/crio-321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16269f76_f249_4ab9_81ac_c6bd4706098b.slice/crio-conmon-321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9.scope\": RecentStats: unable to find data in memory cache]" Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.462897 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.592407 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16269f76-f249-4ab9-81ac-c6bd4706098b-catalog-content\") pod \"16269f76-f249-4ab9-81ac-c6bd4706098b\" (UID: \"16269f76-f249-4ab9-81ac-c6bd4706098b\") " Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.592660 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bfrj\" (UniqueName: \"kubernetes.io/projected/16269f76-f249-4ab9-81ac-c6bd4706098b-kube-api-access-8bfrj\") pod \"16269f76-f249-4ab9-81ac-c6bd4706098b\" (UID: \"16269f76-f249-4ab9-81ac-c6bd4706098b\") " Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.592830 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16269f76-f249-4ab9-81ac-c6bd4706098b-utilities\") pod \"16269f76-f249-4ab9-81ac-c6bd4706098b\" (UID: \"16269f76-f249-4ab9-81ac-c6bd4706098b\") " Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.595123 4741 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16269f76-f249-4ab9-81ac-c6bd4706098b-utilities" (OuterVolumeSpecName: "utilities") pod "16269f76-f249-4ab9-81ac-c6bd4706098b" (UID: "16269f76-f249-4ab9-81ac-c6bd4706098b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.604012 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16269f76-f249-4ab9-81ac-c6bd4706098b-kube-api-access-8bfrj" (OuterVolumeSpecName: "kube-api-access-8bfrj") pod "16269f76-f249-4ab9-81ac-c6bd4706098b" (UID: "16269f76-f249-4ab9-81ac-c6bd4706098b"). InnerVolumeSpecName "kube-api-access-8bfrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.650983 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16269f76-f249-4ab9-81ac-c6bd4706098b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16269f76-f249-4ab9-81ac-c6bd4706098b" (UID: "16269f76-f249-4ab9-81ac-c6bd4706098b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.696258 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16269f76-f249-4ab9-81ac-c6bd4706098b-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.696297 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16269f76-f249-4ab9-81ac-c6bd4706098b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.696310 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bfrj\" (UniqueName: \"kubernetes.io/projected/16269f76-f249-4ab9-81ac-c6bd4706098b-kube-api-access-8bfrj\") on node \"crc\" DevicePath \"\"" Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.789054 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:04:16 crc kubenswrapper[4741]: E0226 10:04:16.789420 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.869422 4741 generic.go:334] "Generic (PLEG): container finished" podID="16269f76-f249-4ab9-81ac-c6bd4706098b" containerID="321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9" exitCode=0 Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.869476 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqqlj" 
event={"ID":"16269f76-f249-4ab9-81ac-c6bd4706098b","Type":"ContainerDied","Data":"321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9"} Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.869510 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqqlj" event={"ID":"16269f76-f249-4ab9-81ac-c6bd4706098b","Type":"ContainerDied","Data":"e3a6a04f0e747f06151624f68277e4e1bccff0980df866238a7a4fc36ce480e6"} Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.869516 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cqqlj" Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.869546 4741 scope.go:117] "RemoveContainer" containerID="321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9" Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.899228 4741 scope.go:117] "RemoveContainer" containerID="5ab71dabe0c79ad7971a9206ff0e483726321734192acee91060e9eb1addd02a" Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.920906 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cqqlj"] Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.933276 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cqqlj"] Feb 26 10:04:16 crc kubenswrapper[4741]: I0226 10:04:16.934505 4741 scope.go:117] "RemoveContainer" containerID="a50e916713351fc53d207e597b59c59b4faf11f056c8ec4e18f81f28adb8d4bf" Feb 26 10:04:17 crc kubenswrapper[4741]: I0226 10:04:17.000536 4741 scope.go:117] "RemoveContainer" containerID="321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9" Feb 26 10:04:17 crc kubenswrapper[4741]: E0226 10:04:17.006244 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9\": container 
with ID starting with 321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9 not found: ID does not exist" containerID="321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9" Feb 26 10:04:17 crc kubenswrapper[4741]: I0226 10:04:17.006308 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9"} err="failed to get container status \"321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9\": rpc error: code = NotFound desc = could not find container \"321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9\": container with ID starting with 321736f64d7bdb2f3847075ee0d11243dfb493c1478a8f6df057d48904c18da9 not found: ID does not exist" Feb 26 10:04:17 crc kubenswrapper[4741]: I0226 10:04:17.006767 4741 scope.go:117] "RemoveContainer" containerID="5ab71dabe0c79ad7971a9206ff0e483726321734192acee91060e9eb1addd02a" Feb 26 10:04:17 crc kubenswrapper[4741]: E0226 10:04:17.007560 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ab71dabe0c79ad7971a9206ff0e483726321734192acee91060e9eb1addd02a\": container with ID starting with 5ab71dabe0c79ad7971a9206ff0e483726321734192acee91060e9eb1addd02a not found: ID does not exist" containerID="5ab71dabe0c79ad7971a9206ff0e483726321734192acee91060e9eb1addd02a" Feb 26 10:04:17 crc kubenswrapper[4741]: I0226 10:04:17.007581 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab71dabe0c79ad7971a9206ff0e483726321734192acee91060e9eb1addd02a"} err="failed to get container status \"5ab71dabe0c79ad7971a9206ff0e483726321734192acee91060e9eb1addd02a\": rpc error: code = NotFound desc = could not find container \"5ab71dabe0c79ad7971a9206ff0e483726321734192acee91060e9eb1addd02a\": container with ID starting with 5ab71dabe0c79ad7971a9206ff0e483726321734192acee91060e9eb1addd02a not 
found: ID does not exist" Feb 26 10:04:17 crc kubenswrapper[4741]: I0226 10:04:17.007597 4741 scope.go:117] "RemoveContainer" containerID="a50e916713351fc53d207e597b59c59b4faf11f056c8ec4e18f81f28adb8d4bf" Feb 26 10:04:17 crc kubenswrapper[4741]: E0226 10:04:17.008530 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a50e916713351fc53d207e597b59c59b4faf11f056c8ec4e18f81f28adb8d4bf\": container with ID starting with a50e916713351fc53d207e597b59c59b4faf11f056c8ec4e18f81f28adb8d4bf not found: ID does not exist" containerID="a50e916713351fc53d207e597b59c59b4faf11f056c8ec4e18f81f28adb8d4bf" Feb 26 10:04:17 crc kubenswrapper[4741]: I0226 10:04:17.008607 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a50e916713351fc53d207e597b59c59b4faf11f056c8ec4e18f81f28adb8d4bf"} err="failed to get container status \"a50e916713351fc53d207e597b59c59b4faf11f056c8ec4e18f81f28adb8d4bf\": rpc error: code = NotFound desc = could not find container \"a50e916713351fc53d207e597b59c59b4faf11f056c8ec4e18f81f28adb8d4bf\": container with ID starting with a50e916713351fc53d207e597b59c59b4faf11f056c8ec4e18f81f28adb8d4bf not found: ID does not exist" Feb 26 10:04:17 crc kubenswrapper[4741]: I0226 10:04:17.802790 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16269f76-f249-4ab9-81ac-c6bd4706098b" path="/var/lib/kubelet/pods/16269f76-f249-4ab9-81ac-c6bd4706098b/volumes" Feb 26 10:04:27 crc kubenswrapper[4741]: I0226 10:04:27.788989 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:04:27 crc kubenswrapper[4741]: E0226 10:04:27.790157 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.511068 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bk4pb"] Feb 26 10:04:34 crc kubenswrapper[4741]: E0226 10:04:34.512393 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a9eedff-1d03-4612-991c-7022c8f34cea" containerName="oc" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.512414 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a9eedff-1d03-4612-991c-7022c8f34cea" containerName="oc" Feb 26 10:04:34 crc kubenswrapper[4741]: E0226 10:04:34.512446 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16269f76-f249-4ab9-81ac-c6bd4706098b" containerName="extract-utilities" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.512453 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="16269f76-f249-4ab9-81ac-c6bd4706098b" containerName="extract-utilities" Feb 26 10:04:34 crc kubenswrapper[4741]: E0226 10:04:34.512477 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16269f76-f249-4ab9-81ac-c6bd4706098b" containerName="registry-server" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.512484 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="16269f76-f249-4ab9-81ac-c6bd4706098b" containerName="registry-server" Feb 26 10:04:34 crc kubenswrapper[4741]: E0226 10:04:34.512505 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16269f76-f249-4ab9-81ac-c6bd4706098b" containerName="extract-content" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.512513 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="16269f76-f249-4ab9-81ac-c6bd4706098b" containerName="extract-content" Feb 26 10:04:34 crc kubenswrapper[4741]: 
I0226 10:04:34.512815 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="16269f76-f249-4ab9-81ac-c6bd4706098b" containerName="registry-server" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.512861 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a9eedff-1d03-4612-991c-7022c8f34cea" containerName="oc" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.515131 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.553417 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bk4pb"] Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.591486 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67bb8adf-074e-4394-8188-048248c7bbdc-catalog-content\") pod \"redhat-operators-bk4pb\" (UID: \"67bb8adf-074e-4394-8188-048248c7bbdc\") " pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.592124 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67bb8adf-074e-4394-8188-048248c7bbdc-utilities\") pod \"redhat-operators-bk4pb\" (UID: \"67bb8adf-074e-4394-8188-048248c7bbdc\") " pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.592343 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbqxm\" (UniqueName: \"kubernetes.io/projected/67bb8adf-074e-4394-8188-048248c7bbdc-kube-api-access-qbqxm\") pod \"redhat-operators-bk4pb\" (UID: \"67bb8adf-074e-4394-8188-048248c7bbdc\") " pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 
10:04:34.695552 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67bb8adf-074e-4394-8188-048248c7bbdc-catalog-content\") pod \"redhat-operators-bk4pb\" (UID: \"67bb8adf-074e-4394-8188-048248c7bbdc\") " pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.695932 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67bb8adf-074e-4394-8188-048248c7bbdc-utilities\") pod \"redhat-operators-bk4pb\" (UID: \"67bb8adf-074e-4394-8188-048248c7bbdc\") " pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.696011 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbqxm\" (UniqueName: \"kubernetes.io/projected/67bb8adf-074e-4394-8188-048248c7bbdc-kube-api-access-qbqxm\") pod \"redhat-operators-bk4pb\" (UID: \"67bb8adf-074e-4394-8188-048248c7bbdc\") " pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.696098 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67bb8adf-074e-4394-8188-048248c7bbdc-catalog-content\") pod \"redhat-operators-bk4pb\" (UID: \"67bb8adf-074e-4394-8188-048248c7bbdc\") " pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.696546 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67bb8adf-074e-4394-8188-048248c7bbdc-utilities\") pod \"redhat-operators-bk4pb\" (UID: \"67bb8adf-074e-4394-8188-048248c7bbdc\") " pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.717614 4741 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qbqxm\" (UniqueName: \"kubernetes.io/projected/67bb8adf-074e-4394-8188-048248c7bbdc-kube-api-access-qbqxm\") pod \"redhat-operators-bk4pb\" (UID: \"67bb8adf-074e-4394-8188-048248c7bbdc\") " pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:04:34 crc kubenswrapper[4741]: I0226 10:04:34.843080 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:04:35 crc kubenswrapper[4741]: I0226 10:04:35.400219 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bk4pb"] Feb 26 10:04:36 crc kubenswrapper[4741]: I0226 10:04:36.116085 4741 generic.go:334] "Generic (PLEG): container finished" podID="67bb8adf-074e-4394-8188-048248c7bbdc" containerID="7bf2a5ca4a86ace528678f263fb60e89be2d8cf46e917e40782dadfc07c2ac07" exitCode=0 Feb 26 10:04:36 crc kubenswrapper[4741]: I0226 10:04:36.116158 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4pb" event={"ID":"67bb8adf-074e-4394-8188-048248c7bbdc","Type":"ContainerDied","Data":"7bf2a5ca4a86ace528678f263fb60e89be2d8cf46e917e40782dadfc07c2ac07"} Feb 26 10:04:36 crc kubenswrapper[4741]: I0226 10:04:36.116204 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4pb" event={"ID":"67bb8adf-074e-4394-8188-048248c7bbdc","Type":"ContainerStarted","Data":"107ed58cacb653cdcb19e400c1ceab87140103d5ce75e104476036927e0dd594"} Feb 26 10:04:39 crc kubenswrapper[4741]: I0226 10:04:39.152481 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4pb" event={"ID":"67bb8adf-074e-4394-8188-048248c7bbdc","Type":"ContainerStarted","Data":"8112930b2883bd74cc65a1c24e1535dcd333c68535abe127d7696a6fcd8dd38f"} Feb 26 10:04:40 crc kubenswrapper[4741]: I0226 10:04:40.788449 4741 scope.go:117] "RemoveContainer" 
containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:04:40 crc kubenswrapper[4741]: E0226 10:04:40.789200 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:04:53 crc kubenswrapper[4741]: I0226 10:04:53.787834 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:04:53 crc kubenswrapper[4741]: E0226 10:04:53.788893 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:04:55 crc kubenswrapper[4741]: I0226 10:04:55.360196 4741 generic.go:334] "Generic (PLEG): container finished" podID="67bb8adf-074e-4394-8188-048248c7bbdc" containerID="8112930b2883bd74cc65a1c24e1535dcd333c68535abe127d7696a6fcd8dd38f" exitCode=0 Feb 26 10:04:55 crc kubenswrapper[4741]: I0226 10:04:55.360467 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4pb" event={"ID":"67bb8adf-074e-4394-8188-048248c7bbdc","Type":"ContainerDied","Data":"8112930b2883bd74cc65a1c24e1535dcd333c68535abe127d7696a6fcd8dd38f"} Feb 26 10:04:56 crc kubenswrapper[4741]: I0226 10:04:56.381934 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4pb" 
event={"ID":"67bb8adf-074e-4394-8188-048248c7bbdc","Type":"ContainerStarted","Data":"40aa2193ce5dc24f4f2819b24472ead89b16740dbc9d891cd67a39a98ea7b9e4"} Feb 26 10:04:56 crc kubenswrapper[4741]: I0226 10:04:56.402813 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bk4pb" podStartSLOduration=2.720022724 podStartE2EDuration="22.402790732s" podCreationTimestamp="2026-02-26 10:04:34 +0000 UTC" firstStartedPulling="2026-02-26 10:04:36.118228476 +0000 UTC m=+6711.114165863" lastFinishedPulling="2026-02-26 10:04:55.800996484 +0000 UTC m=+6730.796933871" observedRunningTime="2026-02-26 10:04:56.401358972 +0000 UTC m=+6731.397296359" watchObservedRunningTime="2026-02-26 10:04:56.402790732 +0000 UTC m=+6731.398728119" Feb 26 10:05:04 crc kubenswrapper[4741]: I0226 10:05:04.787510 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:05:04 crc kubenswrapper[4741]: E0226 10:05:04.789622 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:05:04 crc kubenswrapper[4741]: I0226 10:05:04.843432 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:05:04 crc kubenswrapper[4741]: I0226 10:05:04.843492 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:05:05 crc kubenswrapper[4741]: I0226 10:05:05.896157 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bk4pb" 
podUID="67bb8adf-074e-4394-8188-048248c7bbdc" containerName="registry-server" probeResult="failure" output=< Feb 26 10:05:05 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 10:05:05 crc kubenswrapper[4741]: > Feb 26 10:05:15 crc kubenswrapper[4741]: I0226 10:05:15.903455 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bk4pb" podUID="67bb8adf-074e-4394-8188-048248c7bbdc" containerName="registry-server" probeResult="failure" output=< Feb 26 10:05:15 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 10:05:15 crc kubenswrapper[4741]: > Feb 26 10:05:19 crc kubenswrapper[4741]: I0226 10:05:19.788439 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:05:19 crc kubenswrapper[4741]: E0226 10:05:19.791593 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:05:25 crc kubenswrapper[4741]: I0226 10:05:25.903156 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bk4pb" podUID="67bb8adf-074e-4394-8188-048248c7bbdc" containerName="registry-server" probeResult="failure" output=< Feb 26 10:05:25 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 10:05:25 crc kubenswrapper[4741]: > Feb 26 10:05:33 crc kubenswrapper[4741]: I0226 10:05:33.788742 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:05:33 crc kubenswrapper[4741]: E0226 10:05:33.791230 4741 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:05:35 crc kubenswrapper[4741]: I0226 10:05:35.898878 4741 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bk4pb" podUID="67bb8adf-074e-4394-8188-048248c7bbdc" containerName="registry-server" probeResult="failure" output=< Feb 26 10:05:35 crc kubenswrapper[4741]: timeout: failed to connect service ":50051" within 1s Feb 26 10:05:35 crc kubenswrapper[4741]: > Feb 26 10:05:37 crc kubenswrapper[4741]: I0226 10:05:37.963925 4741 generic.go:334] "Generic (PLEG): container finished" podID="9c3a910a-f9e9-42cc-894b-d73a7fd35c4d" containerID="e74ebaabc25295a499b890f4e76d17ed316d3bfb65d9afec1a6be27bc8f61803" exitCode=0 Feb 26 10:05:37 crc kubenswrapper[4741]: I0226 10:05:37.964057 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dmdkd/must-gather-xtkwd" event={"ID":"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d","Type":"ContainerDied","Data":"e74ebaabc25295a499b890f4e76d17ed316d3bfb65d9afec1a6be27bc8f61803"} Feb 26 10:05:37 crc kubenswrapper[4741]: I0226 10:05:37.965457 4741 scope.go:117] "RemoveContainer" containerID="e74ebaabc25295a499b890f4e76d17ed316d3bfb65d9afec1a6be27bc8f61803" Feb 26 10:05:38 crc kubenswrapper[4741]: I0226 10:05:38.923651 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dmdkd_must-gather-xtkwd_9c3a910a-f9e9-42cc-894b-d73a7fd35c4d/gather/0.log" Feb 26 10:05:44 crc kubenswrapper[4741]: I0226 10:05:44.910494 4741 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:05:44 crc kubenswrapper[4741]: I0226 10:05:44.978512 4741 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:05:45 crc kubenswrapper[4741]: I0226 10:05:45.162371 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bk4pb"] Feb 26 10:05:46 crc kubenswrapper[4741]: I0226 10:05:46.061060 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bk4pb" podUID="67bb8adf-074e-4394-8188-048248c7bbdc" containerName="registry-server" containerID="cri-o://40aa2193ce5dc24f4f2819b24472ead89b16740dbc9d891cd67a39a98ea7b9e4" gracePeriod=2 Feb 26 10:05:46 crc kubenswrapper[4741]: I0226 10:05:46.767340 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:05:46 crc kubenswrapper[4741]: I0226 10:05:46.789311 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:05:46 crc kubenswrapper[4741]: E0226 10:05:46.789633 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:05:46 crc kubenswrapper[4741]: I0226 10:05:46.935667 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67bb8adf-074e-4394-8188-048248c7bbdc-utilities\") pod \"67bb8adf-074e-4394-8188-048248c7bbdc\" (UID: \"67bb8adf-074e-4394-8188-048248c7bbdc\") " Feb 26 
10:05:46 crc kubenswrapper[4741]: I0226 10:05:46.936059 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbqxm\" (UniqueName: \"kubernetes.io/projected/67bb8adf-074e-4394-8188-048248c7bbdc-kube-api-access-qbqxm\") pod \"67bb8adf-074e-4394-8188-048248c7bbdc\" (UID: \"67bb8adf-074e-4394-8188-048248c7bbdc\") " Feb 26 10:05:46 crc kubenswrapper[4741]: I0226 10:05:46.936249 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67bb8adf-074e-4394-8188-048248c7bbdc-catalog-content\") pod \"67bb8adf-074e-4394-8188-048248c7bbdc\" (UID: \"67bb8adf-074e-4394-8188-048248c7bbdc\") " Feb 26 10:05:46 crc kubenswrapper[4741]: I0226 10:05:46.936541 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67bb8adf-074e-4394-8188-048248c7bbdc-utilities" (OuterVolumeSpecName: "utilities") pod "67bb8adf-074e-4394-8188-048248c7bbdc" (UID: "67bb8adf-074e-4394-8188-048248c7bbdc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 10:05:46 crc kubenswrapper[4741]: I0226 10:05:46.938326 4741 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67bb8adf-074e-4394-8188-048248c7bbdc-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 10:05:46 crc kubenswrapper[4741]: I0226 10:05:46.965029 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67bb8adf-074e-4394-8188-048248c7bbdc-kube-api-access-qbqxm" (OuterVolumeSpecName: "kube-api-access-qbqxm") pod "67bb8adf-074e-4394-8188-048248c7bbdc" (UID: "67bb8adf-074e-4394-8188-048248c7bbdc"). InnerVolumeSpecName "kube-api-access-qbqxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.041171 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbqxm\" (UniqueName: \"kubernetes.io/projected/67bb8adf-074e-4394-8188-048248c7bbdc-kube-api-access-qbqxm\") on node \"crc\" DevicePath \"\"" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.065065 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67bb8adf-074e-4394-8188-048248c7bbdc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67bb8adf-074e-4394-8188-048248c7bbdc" (UID: "67bb8adf-074e-4394-8188-048248c7bbdc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.077237 4741 generic.go:334] "Generic (PLEG): container finished" podID="67bb8adf-074e-4394-8188-048248c7bbdc" containerID="40aa2193ce5dc24f4f2819b24472ead89b16740dbc9d891cd67a39a98ea7b9e4" exitCode=0 Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.077712 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4pb" event={"ID":"67bb8adf-074e-4394-8188-048248c7bbdc","Type":"ContainerDied","Data":"40aa2193ce5dc24f4f2819b24472ead89b16740dbc9d891cd67a39a98ea7b9e4"} Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.077751 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk4pb" event={"ID":"67bb8adf-074e-4394-8188-048248c7bbdc","Type":"ContainerDied","Data":"107ed58cacb653cdcb19e400c1ceab87140103d5ce75e104476036927e0dd594"} Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.077772 4741 scope.go:117] "RemoveContainer" containerID="40aa2193ce5dc24f4f2819b24472ead89b16740dbc9d891cd67a39a98ea7b9e4" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.077984 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bk4pb" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.137220 4741 scope.go:117] "RemoveContainer" containerID="8112930b2883bd74cc65a1c24e1535dcd333c68535abe127d7696a6fcd8dd38f" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.137536 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bk4pb"] Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.144632 4741 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67bb8adf-074e-4394-8188-048248c7bbdc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.152083 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bk4pb"] Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.171870 4741 scope.go:117] "RemoveContainer" containerID="7bf2a5ca4a86ace528678f263fb60e89be2d8cf46e917e40782dadfc07c2ac07" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.227303 4741 scope.go:117] "RemoveContainer" containerID="40aa2193ce5dc24f4f2819b24472ead89b16740dbc9d891cd67a39a98ea7b9e4" Feb 26 10:05:47 crc kubenswrapper[4741]: E0226 10:05:47.228009 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40aa2193ce5dc24f4f2819b24472ead89b16740dbc9d891cd67a39a98ea7b9e4\": container with ID starting with 40aa2193ce5dc24f4f2819b24472ead89b16740dbc9d891cd67a39a98ea7b9e4 not found: ID does not exist" containerID="40aa2193ce5dc24f4f2819b24472ead89b16740dbc9d891cd67a39a98ea7b9e4" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.228307 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40aa2193ce5dc24f4f2819b24472ead89b16740dbc9d891cd67a39a98ea7b9e4"} err="failed to get container status 
\"40aa2193ce5dc24f4f2819b24472ead89b16740dbc9d891cd67a39a98ea7b9e4\": rpc error: code = NotFound desc = could not find container \"40aa2193ce5dc24f4f2819b24472ead89b16740dbc9d891cd67a39a98ea7b9e4\": container with ID starting with 40aa2193ce5dc24f4f2819b24472ead89b16740dbc9d891cd67a39a98ea7b9e4 not found: ID does not exist" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.228361 4741 scope.go:117] "RemoveContainer" containerID="8112930b2883bd74cc65a1c24e1535dcd333c68535abe127d7696a6fcd8dd38f" Feb 26 10:05:47 crc kubenswrapper[4741]: E0226 10:05:47.228845 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8112930b2883bd74cc65a1c24e1535dcd333c68535abe127d7696a6fcd8dd38f\": container with ID starting with 8112930b2883bd74cc65a1c24e1535dcd333c68535abe127d7696a6fcd8dd38f not found: ID does not exist" containerID="8112930b2883bd74cc65a1c24e1535dcd333c68535abe127d7696a6fcd8dd38f" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.228890 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8112930b2883bd74cc65a1c24e1535dcd333c68535abe127d7696a6fcd8dd38f"} err="failed to get container status \"8112930b2883bd74cc65a1c24e1535dcd333c68535abe127d7696a6fcd8dd38f\": rpc error: code = NotFound desc = could not find container \"8112930b2883bd74cc65a1c24e1535dcd333c68535abe127d7696a6fcd8dd38f\": container with ID starting with 8112930b2883bd74cc65a1c24e1535dcd333c68535abe127d7696a6fcd8dd38f not found: ID does not exist" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.228920 4741 scope.go:117] "RemoveContainer" containerID="7bf2a5ca4a86ace528678f263fb60e89be2d8cf46e917e40782dadfc07c2ac07" Feb 26 10:05:47 crc kubenswrapper[4741]: E0226 10:05:47.229367 4741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7bf2a5ca4a86ace528678f263fb60e89be2d8cf46e917e40782dadfc07c2ac07\": container with ID starting with 7bf2a5ca4a86ace528678f263fb60e89be2d8cf46e917e40782dadfc07c2ac07 not found: ID does not exist" containerID="7bf2a5ca4a86ace528678f263fb60e89be2d8cf46e917e40782dadfc07c2ac07" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.229440 4741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf2a5ca4a86ace528678f263fb60e89be2d8cf46e917e40782dadfc07c2ac07"} err="failed to get container status \"7bf2a5ca4a86ace528678f263fb60e89be2d8cf46e917e40782dadfc07c2ac07\": rpc error: code = NotFound desc = could not find container \"7bf2a5ca4a86ace528678f263fb60e89be2d8cf46e917e40782dadfc07c2ac07\": container with ID starting with 7bf2a5ca4a86ace528678f263fb60e89be2d8cf46e917e40782dadfc07c2ac07 not found: ID does not exist" Feb 26 10:05:47 crc kubenswrapper[4741]: I0226 10:05:47.826998 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67bb8adf-074e-4394-8188-048248c7bbdc" path="/var/lib/kubelet/pods/67bb8adf-074e-4394-8188-048248c7bbdc/volumes" Feb 26 10:05:49 crc kubenswrapper[4741]: I0226 10:05:49.594016 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dmdkd/must-gather-xtkwd"] Feb 26 10:05:49 crc kubenswrapper[4741]: I0226 10:05:49.601445 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dmdkd/must-gather-xtkwd" podUID="9c3a910a-f9e9-42cc-894b-d73a7fd35c4d" containerName="copy" containerID="cri-o://073327ca617160c509dba045a43cfbf6c6462b37e031e81500d588be065ca07e" gracePeriod=2 Feb 26 10:05:49 crc kubenswrapper[4741]: I0226 10:05:49.631616 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dmdkd/must-gather-xtkwd"] Feb 26 10:05:50 crc kubenswrapper[4741]: I0226 10:05:50.117407 4741 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-dmdkd_must-gather-xtkwd_9c3a910a-f9e9-42cc-894b-d73a7fd35c4d/copy/0.log" Feb 26 10:05:50 crc kubenswrapper[4741]: I0226 10:05:50.118570 4741 generic.go:334] "Generic (PLEG): container finished" podID="9c3a910a-f9e9-42cc-894b-d73a7fd35c4d" containerID="073327ca617160c509dba045a43cfbf6c6462b37e031e81500d588be065ca07e" exitCode=143 Feb 26 10:05:50 crc kubenswrapper[4741]: I0226 10:05:50.118681 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c89d772ac6bbe48bf365b3306ed74a89aede1e7ce58f7100592743f6b524c8c" Feb 26 10:05:50 crc kubenswrapper[4741]: I0226 10:05:50.213913 4741 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dmdkd_must-gather-xtkwd_9c3a910a-f9e9-42cc-894b-d73a7fd35c4d/copy/0.log" Feb 26 10:05:50 crc kubenswrapper[4741]: I0226 10:05:50.214413 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dmdkd/must-gather-xtkwd" Feb 26 10:05:50 crc kubenswrapper[4741]: I0226 10:05:50.275813 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c3a910a-f9e9-42cc-894b-d73a7fd35c4d-must-gather-output\") pod \"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d\" (UID: \"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d\") " Feb 26 10:05:50 crc kubenswrapper[4741]: I0226 10:05:50.275904 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4bbv\" (UniqueName: \"kubernetes.io/projected/9c3a910a-f9e9-42cc-894b-d73a7fd35c4d-kube-api-access-k4bbv\") pod \"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d\" (UID: \"9c3a910a-f9e9-42cc-894b-d73a7fd35c4d\") " Feb 26 10:05:50 crc kubenswrapper[4741]: I0226 10:05:50.282806 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c3a910a-f9e9-42cc-894b-d73a7fd35c4d-kube-api-access-k4bbv" (OuterVolumeSpecName: 
"kube-api-access-k4bbv") pod "9c3a910a-f9e9-42cc-894b-d73a7fd35c4d" (UID: "9c3a910a-f9e9-42cc-894b-d73a7fd35c4d"). InnerVolumeSpecName "kube-api-access-k4bbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 10:05:50 crc kubenswrapper[4741]: I0226 10:05:50.383122 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4bbv\" (UniqueName: \"kubernetes.io/projected/9c3a910a-f9e9-42cc-894b-d73a7fd35c4d-kube-api-access-k4bbv\") on node \"crc\" DevicePath \"\"" Feb 26 10:05:50 crc kubenswrapper[4741]: I0226 10:05:50.462523 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c3a910a-f9e9-42cc-894b-d73a7fd35c4d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9c3a910a-f9e9-42cc-894b-d73a7fd35c4d" (UID: "9c3a910a-f9e9-42cc-894b-d73a7fd35c4d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 10:05:50 crc kubenswrapper[4741]: I0226 10:05:50.485419 4741 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c3a910a-f9e9-42cc-894b-d73a7fd35c4d-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 26 10:05:51 crc kubenswrapper[4741]: I0226 10:05:51.129200 4741 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dmdkd/must-gather-xtkwd" Feb 26 10:05:51 crc kubenswrapper[4741]: I0226 10:05:51.802898 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c3a910a-f9e9-42cc-894b-d73a7fd35c4d" path="/var/lib/kubelet/pods/9c3a910a-f9e9-42cc-894b-d73a7fd35c4d/volumes" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.151083 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535006-94rv4"] Feb 26 10:06:00 crc kubenswrapper[4741]: E0226 10:06:00.152251 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3a910a-f9e9-42cc-894b-d73a7fd35c4d" containerName="gather" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.152266 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3a910a-f9e9-42cc-894b-d73a7fd35c4d" containerName="gather" Feb 26 10:06:00 crc kubenswrapper[4741]: E0226 10:06:00.152276 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3a910a-f9e9-42cc-894b-d73a7fd35c4d" containerName="copy" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.152282 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3a910a-f9e9-42cc-894b-d73a7fd35c4d" containerName="copy" Feb 26 10:06:00 crc kubenswrapper[4741]: E0226 10:06:00.152305 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67bb8adf-074e-4394-8188-048248c7bbdc" containerName="extract-utilities" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.152313 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="67bb8adf-074e-4394-8188-048248c7bbdc" containerName="extract-utilities" Feb 26 10:06:00 crc kubenswrapper[4741]: E0226 10:06:00.152327 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67bb8adf-074e-4394-8188-048248c7bbdc" containerName="extract-content" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.152334 4741 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="67bb8adf-074e-4394-8188-048248c7bbdc" containerName="extract-content" Feb 26 10:06:00 crc kubenswrapper[4741]: E0226 10:06:00.152348 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67bb8adf-074e-4394-8188-048248c7bbdc" containerName="registry-server" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.152354 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="67bb8adf-074e-4394-8188-048248c7bbdc" containerName="registry-server" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.152641 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c3a910a-f9e9-42cc-894b-d73a7fd35c4d" containerName="gather" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.152665 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="67bb8adf-074e-4394-8188-048248c7bbdc" containerName="registry-server" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.152683 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c3a910a-f9e9-42cc-894b-d73a7fd35c4d" containerName="copy" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.155948 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535006-94rv4" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.159031 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.159210 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.159252 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.164879 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535006-94rv4"] Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.282923 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdrgr\" (UniqueName: \"kubernetes.io/projected/2f58c994-ecec-4389-901c-2e81eee4bda9-kube-api-access-bdrgr\") pod \"auto-csr-approver-29535006-94rv4\" (UID: \"2f58c994-ecec-4389-901c-2e81eee4bda9\") " pod="openshift-infra/auto-csr-approver-29535006-94rv4" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.385913 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdrgr\" (UniqueName: \"kubernetes.io/projected/2f58c994-ecec-4389-901c-2e81eee4bda9-kube-api-access-bdrgr\") pod \"auto-csr-approver-29535006-94rv4\" (UID: \"2f58c994-ecec-4389-901c-2e81eee4bda9\") " pod="openshift-infra/auto-csr-approver-29535006-94rv4" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.408712 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdrgr\" (UniqueName: \"kubernetes.io/projected/2f58c994-ecec-4389-901c-2e81eee4bda9-kube-api-access-bdrgr\") pod \"auto-csr-approver-29535006-94rv4\" (UID: \"2f58c994-ecec-4389-901c-2e81eee4bda9\") " 
pod="openshift-infra/auto-csr-approver-29535006-94rv4" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.480127 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535006-94rv4" Feb 26 10:06:00 crc kubenswrapper[4741]: I0226 10:06:00.788482 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:06:00 crc kubenswrapper[4741]: E0226 10:06:00.789300 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:06:01 crc kubenswrapper[4741]: I0226 10:06:01.022245 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535006-94rv4"] Feb 26 10:06:01 crc kubenswrapper[4741]: I0226 10:06:01.236239 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535006-94rv4" event={"ID":"2f58c994-ecec-4389-901c-2e81eee4bda9","Type":"ContainerStarted","Data":"b7d52448f98e0af0d4da761ed74cfe57fbea7b4345300e33933257d1c8ded923"} Feb 26 10:06:03 crc kubenswrapper[4741]: I0226 10:06:03.261896 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535006-94rv4" event={"ID":"2f58c994-ecec-4389-901c-2e81eee4bda9","Type":"ContainerStarted","Data":"307a1d44471bcba32fce9c122907d1cebb8d58116a54149897c9b59401badb57"} Feb 26 10:06:03 crc kubenswrapper[4741]: I0226 10:06:03.284286 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535006-94rv4" podStartSLOduration=2.334898503 
podStartE2EDuration="3.284260939s" podCreationTimestamp="2026-02-26 10:06:00 +0000 UTC" firstStartedPulling="2026-02-26 10:06:01.024946161 +0000 UTC m=+6796.020883548" lastFinishedPulling="2026-02-26 10:06:01.974308597 +0000 UTC m=+6796.970245984" observedRunningTime="2026-02-26 10:06:03.278595188 +0000 UTC m=+6798.274532575" watchObservedRunningTime="2026-02-26 10:06:03.284260939 +0000 UTC m=+6798.280198326" Feb 26 10:06:04 crc kubenswrapper[4741]: I0226 10:06:04.281684 4741 generic.go:334] "Generic (PLEG): container finished" podID="2f58c994-ecec-4389-901c-2e81eee4bda9" containerID="307a1d44471bcba32fce9c122907d1cebb8d58116a54149897c9b59401badb57" exitCode=0 Feb 26 10:06:04 crc kubenswrapper[4741]: I0226 10:06:04.281774 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535006-94rv4" event={"ID":"2f58c994-ecec-4389-901c-2e81eee4bda9","Type":"ContainerDied","Data":"307a1d44471bcba32fce9c122907d1cebb8d58116a54149897c9b59401badb57"} Feb 26 10:06:05 crc kubenswrapper[4741]: I0226 10:06:05.738244 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535006-94rv4" Feb 26 10:06:05 crc kubenswrapper[4741]: I0226 10:06:05.914888 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdrgr\" (UniqueName: \"kubernetes.io/projected/2f58c994-ecec-4389-901c-2e81eee4bda9-kube-api-access-bdrgr\") pod \"2f58c994-ecec-4389-901c-2e81eee4bda9\" (UID: \"2f58c994-ecec-4389-901c-2e81eee4bda9\") " Feb 26 10:06:05 crc kubenswrapper[4741]: I0226 10:06:05.922464 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f58c994-ecec-4389-901c-2e81eee4bda9-kube-api-access-bdrgr" (OuterVolumeSpecName: "kube-api-access-bdrgr") pod "2f58c994-ecec-4389-901c-2e81eee4bda9" (UID: "2f58c994-ecec-4389-901c-2e81eee4bda9"). InnerVolumeSpecName "kube-api-access-bdrgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 10:06:06 crc kubenswrapper[4741]: I0226 10:06:06.019250 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdrgr\" (UniqueName: \"kubernetes.io/projected/2f58c994-ecec-4389-901c-2e81eee4bda9-kube-api-access-bdrgr\") on node \"crc\" DevicePath \"\"" Feb 26 10:06:06 crc kubenswrapper[4741]: I0226 10:06:06.312559 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535006-94rv4" event={"ID":"2f58c994-ecec-4389-901c-2e81eee4bda9","Type":"ContainerDied","Data":"b7d52448f98e0af0d4da761ed74cfe57fbea7b4345300e33933257d1c8ded923"} Feb 26 10:06:06 crc kubenswrapper[4741]: I0226 10:06:06.312614 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7d52448f98e0af0d4da761ed74cfe57fbea7b4345300e33933257d1c8ded923" Feb 26 10:06:06 crc kubenswrapper[4741]: I0226 10:06:06.312660 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535006-94rv4" Feb 26 10:06:06 crc kubenswrapper[4741]: I0226 10:06:06.367287 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535000-47jz6"] Feb 26 10:06:06 crc kubenswrapper[4741]: I0226 10:06:06.384625 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535000-47jz6"] Feb 26 10:06:07 crc kubenswrapper[4741]: I0226 10:06:07.801926 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09171698-0f90-4ea8-ae5f-68ae73081d30" path="/var/lib/kubelet/pods/09171698-0f90-4ea8-ae5f-68ae73081d30/volumes" Feb 26 10:06:11 crc kubenswrapper[4741]: I0226 10:06:11.936391 4741 scope.go:117] "RemoveContainer" containerID="073327ca617160c509dba045a43cfbf6c6462b37e031e81500d588be065ca07e" Feb 26 10:06:11 crc kubenswrapper[4741]: I0226 10:06:11.971893 4741 scope.go:117] "RemoveContainer" 
containerID="e7a1dfd160d6d8234339b75f41d9b75ffff316c9f2832b4c7558526441390acf" Feb 26 10:06:12 crc kubenswrapper[4741]: I0226 10:06:12.029539 4741 scope.go:117] "RemoveContainer" containerID="e74ebaabc25295a499b890f4e76d17ed316d3bfb65d9afec1a6be27bc8f61803" Feb 26 10:06:13 crc kubenswrapper[4741]: I0226 10:06:13.802368 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:06:13 crc kubenswrapper[4741]: E0226 10:06:13.803960 4741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zqf2s_openshift-machine-config-operator(2c7b5b01-4061-4003-b002-a977260886c5)\"" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" Feb 26 10:06:27 crc kubenswrapper[4741]: I0226 10:06:27.790074 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:06:28 crc kubenswrapper[4741]: I0226 10:06:28.611902 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"f666901765c378f5be8dc17ddd148e4061839ca96958facb7b246f810d5e5c75"} Feb 26 10:08:00 crc kubenswrapper[4741]: I0226 10:08:00.153008 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535008-b682b"] Feb 26 10:08:00 crc kubenswrapper[4741]: E0226 10:08:00.154262 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f58c994-ecec-4389-901c-2e81eee4bda9" containerName="oc" Feb 26 10:08:00 crc kubenswrapper[4741]: I0226 10:08:00.154279 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f58c994-ecec-4389-901c-2e81eee4bda9" containerName="oc" Feb 26 
10:08:00 crc kubenswrapper[4741]: I0226 10:08:00.154509 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f58c994-ecec-4389-901c-2e81eee4bda9" containerName="oc" Feb 26 10:08:00 crc kubenswrapper[4741]: I0226 10:08:00.155527 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535008-b682b" Feb 26 10:08:00 crc kubenswrapper[4741]: I0226 10:08:00.160551 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 10:08:00 crc kubenswrapper[4741]: I0226 10:08:00.160912 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 10:08:00 crc kubenswrapper[4741]: I0226 10:08:00.161101 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 10:08:00 crc kubenswrapper[4741]: I0226 10:08:00.166512 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535008-b682b"] Feb 26 10:08:00 crc kubenswrapper[4741]: I0226 10:08:00.298652 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhmd4\" (UniqueName: \"kubernetes.io/projected/0c6ee61c-70ca-49db-a9ef-23201026c1da-kube-api-access-nhmd4\") pod \"auto-csr-approver-29535008-b682b\" (UID: \"0c6ee61c-70ca-49db-a9ef-23201026c1da\") " pod="openshift-infra/auto-csr-approver-29535008-b682b" Feb 26 10:08:00 crc kubenswrapper[4741]: I0226 10:08:00.402318 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhmd4\" (UniqueName: \"kubernetes.io/projected/0c6ee61c-70ca-49db-a9ef-23201026c1da-kube-api-access-nhmd4\") pod \"auto-csr-approver-29535008-b682b\" (UID: \"0c6ee61c-70ca-49db-a9ef-23201026c1da\") " pod="openshift-infra/auto-csr-approver-29535008-b682b" Feb 26 10:08:00 crc kubenswrapper[4741]: I0226 10:08:00.429005 
4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhmd4\" (UniqueName: \"kubernetes.io/projected/0c6ee61c-70ca-49db-a9ef-23201026c1da-kube-api-access-nhmd4\") pod \"auto-csr-approver-29535008-b682b\" (UID: \"0c6ee61c-70ca-49db-a9ef-23201026c1da\") " pod="openshift-infra/auto-csr-approver-29535008-b682b" Feb 26 10:08:00 crc kubenswrapper[4741]: I0226 10:08:00.477880 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535008-b682b" Feb 26 10:08:01 crc kubenswrapper[4741]: I0226 10:08:01.326333 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535008-b682b"] Feb 26 10:08:01 crc kubenswrapper[4741]: I0226 10:08:01.336987 4741 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 10:08:01 crc kubenswrapper[4741]: I0226 10:08:01.778635 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535008-b682b" event={"ID":"0c6ee61c-70ca-49db-a9ef-23201026c1da","Type":"ContainerStarted","Data":"7804180cd8f883689c7c4887ac1cd4d4b33f6e15aac144b92660d492cda129fa"} Feb 26 10:08:02 crc kubenswrapper[4741]: I0226 10:08:02.812576 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535008-b682b" event={"ID":"0c6ee61c-70ca-49db-a9ef-23201026c1da","Type":"ContainerStarted","Data":"3b86f7a73dbeb956f3dd7cd22555f81065fe9ade5e37a923c654eae74b7cc5ef"} Feb 26 10:08:02 crc kubenswrapper[4741]: I0226 10:08:02.846956 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535008-b682b" podStartSLOduration=2.063444591 podStartE2EDuration="2.846928569s" podCreationTimestamp="2026-02-26 10:08:00 +0000 UTC" firstStartedPulling="2026-02-26 10:08:01.332976052 +0000 UTC m=+6916.328913439" lastFinishedPulling="2026-02-26 10:08:02.11646003 +0000 UTC m=+6917.112397417" 
observedRunningTime="2026-02-26 10:08:02.83361269 +0000 UTC m=+6917.829550077" watchObservedRunningTime="2026-02-26 10:08:02.846928569 +0000 UTC m=+6917.842865956" Feb 26 10:08:03 crc kubenswrapper[4741]: I0226 10:08:03.845602 4741 generic.go:334] "Generic (PLEG): container finished" podID="0c6ee61c-70ca-49db-a9ef-23201026c1da" containerID="3b86f7a73dbeb956f3dd7cd22555f81065fe9ade5e37a923c654eae74b7cc5ef" exitCode=0 Feb 26 10:08:03 crc kubenswrapper[4741]: I0226 10:08:03.845966 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535008-b682b" event={"ID":"0c6ee61c-70ca-49db-a9ef-23201026c1da","Type":"ContainerDied","Data":"3b86f7a73dbeb956f3dd7cd22555f81065fe9ade5e37a923c654eae74b7cc5ef"} Feb 26 10:08:05 crc kubenswrapper[4741]: I0226 10:08:05.674797 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535008-b682b" Feb 26 10:08:05 crc kubenswrapper[4741]: I0226 10:08:05.762442 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhmd4\" (UniqueName: \"kubernetes.io/projected/0c6ee61c-70ca-49db-a9ef-23201026c1da-kube-api-access-nhmd4\") pod \"0c6ee61c-70ca-49db-a9ef-23201026c1da\" (UID: \"0c6ee61c-70ca-49db-a9ef-23201026c1da\") " Feb 26 10:08:05 crc kubenswrapper[4741]: I0226 10:08:05.786202 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6ee61c-70ca-49db-a9ef-23201026c1da-kube-api-access-nhmd4" (OuterVolumeSpecName: "kube-api-access-nhmd4") pod "0c6ee61c-70ca-49db-a9ef-23201026c1da" (UID: "0c6ee61c-70ca-49db-a9ef-23201026c1da"). InnerVolumeSpecName "kube-api-access-nhmd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 10:08:05 crc kubenswrapper[4741]: I0226 10:08:05.867704 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhmd4\" (UniqueName: \"kubernetes.io/projected/0c6ee61c-70ca-49db-a9ef-23201026c1da-kube-api-access-nhmd4\") on node \"crc\" DevicePath \"\"" Feb 26 10:08:05 crc kubenswrapper[4741]: I0226 10:08:05.878636 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535008-b682b" event={"ID":"0c6ee61c-70ca-49db-a9ef-23201026c1da","Type":"ContainerDied","Data":"7804180cd8f883689c7c4887ac1cd4d4b33f6e15aac144b92660d492cda129fa"} Feb 26 10:08:05 crc kubenswrapper[4741]: I0226 10:08:05.878698 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7804180cd8f883689c7c4887ac1cd4d4b33f6e15aac144b92660d492cda129fa" Feb 26 10:08:05 crc kubenswrapper[4741]: I0226 10:08:05.878777 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535008-b682b" Feb 26 10:08:05 crc kubenswrapper[4741]: I0226 10:08:05.919545 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535002-dsw9n"] Feb 26 10:08:05 crc kubenswrapper[4741]: I0226 10:08:05.931899 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535002-dsw9n"] Feb 26 10:08:07 crc kubenswrapper[4741]: I0226 10:08:07.803257 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db97b7d2-03ce-4791-a8ba-92eade2b3dfd" path="/var/lib/kubelet/pods/db97b7d2-03ce-4791-a8ba-92eade2b3dfd/volumes" Feb 26 10:08:12 crc kubenswrapper[4741]: I0226 10:08:12.228780 4741 scope.go:117] "RemoveContainer" containerID="3b3a4231c802faf1ea99dee1ebfa20e0a05c09570ccf48007577e0d52390c7d0" Feb 26 10:08:55 crc kubenswrapper[4741]: I0226 10:08:55.150805 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 10:08:55 crc kubenswrapper[4741]: I0226 10:08:55.152498 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 10:09:25 crc kubenswrapper[4741]: I0226 10:09:25.149719 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 10:09:25 crc kubenswrapper[4741]: I0226 10:09:25.150607 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 10:09:55 crc kubenswrapper[4741]: I0226 10:09:55.149104 4741 patch_prober.go:28] interesting pod/machine-config-daemon-zqf2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 10:09:55 crc kubenswrapper[4741]: I0226 10:09:55.149723 4741 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 10:09:55 crc kubenswrapper[4741]: I0226 10:09:55.149776 4741 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" Feb 26 10:09:55 crc kubenswrapper[4741]: I0226 10:09:55.150958 4741 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f666901765c378f5be8dc17ddd148e4061839ca96958facb7b246f810d5e5c75"} pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 10:09:55 crc kubenswrapper[4741]: I0226 10:09:55.151018 4741 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" podUID="2c7b5b01-4061-4003-b002-a977260886c5" containerName="machine-config-daemon" containerID="cri-o://f666901765c378f5be8dc17ddd148e4061839ca96958facb7b246f810d5e5c75" gracePeriod=600 Feb 26 10:09:55 crc kubenswrapper[4741]: I0226 10:09:55.287729 4741 generic.go:334] "Generic (PLEG): container finished" podID="2c7b5b01-4061-4003-b002-a977260886c5" containerID="f666901765c378f5be8dc17ddd148e4061839ca96958facb7b246f810d5e5c75" exitCode=0 Feb 26 10:09:55 crc kubenswrapper[4741]: I0226 10:09:55.287782 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerDied","Data":"f666901765c378f5be8dc17ddd148e4061839ca96958facb7b246f810d5e5c75"} Feb 26 10:09:55 crc kubenswrapper[4741]: I0226 10:09:55.287828 4741 scope.go:117] "RemoveContainer" containerID="8d15f7b1596e432aacdcfda28211fe535c7695766ff50e2958c1337d2f27d576" Feb 26 10:09:56 crc kubenswrapper[4741]: I0226 10:09:56.303208 4741 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zqf2s" event={"ID":"2c7b5b01-4061-4003-b002-a977260886c5","Type":"ContainerStarted","Data":"cfe2f0e605eb783a8ad27af3a9724e15fa32999f7cd777b5949689ba6da3b445"} Feb 26 10:10:00 crc kubenswrapper[4741]: I0226 10:10:00.167161 4741 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535010-92frk"] Feb 26 10:10:00 crc kubenswrapper[4741]: E0226 10:10:00.168816 4741 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6ee61c-70ca-49db-a9ef-23201026c1da" containerName="oc" Feb 26 10:10:00 crc kubenswrapper[4741]: I0226 10:10:00.168834 4741 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6ee61c-70ca-49db-a9ef-23201026c1da" containerName="oc" Feb 26 10:10:00 crc kubenswrapper[4741]: I0226 10:10:00.169129 4741 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6ee61c-70ca-49db-a9ef-23201026c1da" containerName="oc" Feb 26 10:10:00 crc kubenswrapper[4741]: I0226 10:10:00.170800 4741 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535010-92frk" Feb 26 10:10:00 crc kubenswrapper[4741]: I0226 10:10:00.173469 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 10:10:00 crc kubenswrapper[4741]: I0226 10:10:00.173570 4741 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 10:10:00 crc kubenswrapper[4741]: I0226 10:10:00.173695 4741 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jdbn6" Feb 26 10:10:00 crc kubenswrapper[4741]: I0226 10:10:00.183648 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535010-92frk"] Feb 26 10:10:00 crc kubenswrapper[4741]: I0226 10:10:00.355568 4741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpj8m\" (UniqueName: \"kubernetes.io/projected/4270ce1f-59f6-4b54-ade2-c40e4f35f48b-kube-api-access-xpj8m\") pod \"auto-csr-approver-29535010-92frk\" (UID: \"4270ce1f-59f6-4b54-ade2-c40e4f35f48b\") " pod="openshift-infra/auto-csr-approver-29535010-92frk" Feb 26 10:10:00 crc kubenswrapper[4741]: I0226 10:10:00.457790 4741 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpj8m\" (UniqueName: \"kubernetes.io/projected/4270ce1f-59f6-4b54-ade2-c40e4f35f48b-kube-api-access-xpj8m\") pod \"auto-csr-approver-29535010-92frk\" (UID: \"4270ce1f-59f6-4b54-ade2-c40e4f35f48b\") " pod="openshift-infra/auto-csr-approver-29535010-92frk" Feb 26 10:10:00 crc kubenswrapper[4741]: I0226 10:10:00.479071 4741 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpj8m\" (UniqueName: \"kubernetes.io/projected/4270ce1f-59f6-4b54-ade2-c40e4f35f48b-kube-api-access-xpj8m\") pod \"auto-csr-approver-29535010-92frk\" (UID: \"4270ce1f-59f6-4b54-ade2-c40e4f35f48b\") " 
pod="openshift-infra/auto-csr-approver-29535010-92frk" Feb 26 10:10:00 crc kubenswrapper[4741]: I0226 10:10:00.492738 4741 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535010-92frk" Feb 26 10:10:01 crc kubenswrapper[4741]: I0226 10:10:01.008018 4741 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535010-92frk"] Feb 26 10:10:01 crc kubenswrapper[4741]: I0226 10:10:01.358838 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535010-92frk" event={"ID":"4270ce1f-59f6-4b54-ade2-c40e4f35f48b","Type":"ContainerStarted","Data":"833e4dd741ab095a7a308994c3cd5f3194ba3ede1af7b234f3f41a9c7521604a"} Feb 26 10:10:02 crc kubenswrapper[4741]: I0226 10:10:02.371288 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535010-92frk" event={"ID":"4270ce1f-59f6-4b54-ade2-c40e4f35f48b","Type":"ContainerStarted","Data":"a5f595f799b78163f31466fd74aed3e6df71bed91350b03745c0e18bb9a79631"} Feb 26 10:10:02 crc kubenswrapper[4741]: I0226 10:10:02.397408 4741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535010-92frk" podStartSLOduration=1.411157034 podStartE2EDuration="2.397383998s" podCreationTimestamp="2026-02-26 10:10:00 +0000 UTC" firstStartedPulling="2026-02-26 10:10:01.013320817 +0000 UTC m=+7036.009258204" lastFinishedPulling="2026-02-26 10:10:01.999547781 +0000 UTC m=+7036.995485168" observedRunningTime="2026-02-26 10:10:02.386673484 +0000 UTC m=+7037.382610871" watchObservedRunningTime="2026-02-26 10:10:02.397383998 +0000 UTC m=+7037.393321385" Feb 26 10:10:03 crc kubenswrapper[4741]: I0226 10:10:03.400345 4741 generic.go:334] "Generic (PLEG): container finished" podID="4270ce1f-59f6-4b54-ade2-c40e4f35f48b" containerID="a5f595f799b78163f31466fd74aed3e6df71bed91350b03745c0e18bb9a79631" exitCode=0 Feb 26 10:10:03 crc 
kubenswrapper[4741]: I0226 10:10:03.400541 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535010-92frk" event={"ID":"4270ce1f-59f6-4b54-ade2-c40e4f35f48b","Type":"ContainerDied","Data":"a5f595f799b78163f31466fd74aed3e6df71bed91350b03745c0e18bb9a79631"} Feb 26 10:10:04 crc kubenswrapper[4741]: I0226 10:10:04.893560 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535010-92frk" Feb 26 10:10:05 crc kubenswrapper[4741]: I0226 10:10:05.035571 4741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpj8m\" (UniqueName: \"kubernetes.io/projected/4270ce1f-59f6-4b54-ade2-c40e4f35f48b-kube-api-access-xpj8m\") pod \"4270ce1f-59f6-4b54-ade2-c40e4f35f48b\" (UID: \"4270ce1f-59f6-4b54-ade2-c40e4f35f48b\") " Feb 26 10:10:05 crc kubenswrapper[4741]: I0226 10:10:05.041966 4741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4270ce1f-59f6-4b54-ade2-c40e4f35f48b-kube-api-access-xpj8m" (OuterVolumeSpecName: "kube-api-access-xpj8m") pod "4270ce1f-59f6-4b54-ade2-c40e4f35f48b" (UID: "4270ce1f-59f6-4b54-ade2-c40e4f35f48b"). InnerVolumeSpecName "kube-api-access-xpj8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 10:10:05 crc kubenswrapper[4741]: I0226 10:10:05.139562 4741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpj8m\" (UniqueName: \"kubernetes.io/projected/4270ce1f-59f6-4b54-ade2-c40e4f35f48b-kube-api-access-xpj8m\") on node \"crc\" DevicePath \"\"" Feb 26 10:10:05 crc kubenswrapper[4741]: I0226 10:10:05.428661 4741 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535010-92frk" event={"ID":"4270ce1f-59f6-4b54-ade2-c40e4f35f48b","Type":"ContainerDied","Data":"833e4dd741ab095a7a308994c3cd5f3194ba3ede1af7b234f3f41a9c7521604a"} Feb 26 10:10:05 crc kubenswrapper[4741]: I0226 10:10:05.428971 4741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="833e4dd741ab095a7a308994c3cd5f3194ba3ede1af7b234f3f41a9c7521604a" Feb 26 10:10:05 crc kubenswrapper[4741]: I0226 10:10:05.428743 4741 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535010-92frk" Feb 26 10:10:05 crc kubenswrapper[4741]: I0226 10:10:05.480465 4741 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535004-7mn4j"] Feb 26 10:10:05 crc kubenswrapper[4741]: I0226 10:10:05.492812 4741 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535004-7mn4j"] Feb 26 10:10:05 crc kubenswrapper[4741]: I0226 10:10:05.806539 4741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a9eedff-1d03-4612-991c-7022c8f34cea" path="/var/lib/kubelet/pods/6a9eedff-1d03-4612-991c-7022c8f34cea/volumes" Feb 26 10:10:12 crc kubenswrapper[4741]: I0226 10:10:12.473653 4741 scope.go:117] "RemoveContainer" containerID="d1c54b1aee3ba5e719a3c0f03a7df9bb8c2bbee437e3c696c1c6aeac91f10e1b"